Harnad, S. (2003b) Categorical Perception. Encyclopedia of Cognitive Science. Nature Publishing Group. Macmillan.
Differences can be perceived as gradual and quantitative, as with different shades of gray, or they can be perceived as more abrupt and qualitative, as with different colors. The first is called continuous perception and the second categorical perception. Categorical perception (CP) can be inborn or can be induced by learning. Formerly thought to be peculiar to speech and color perception, CP turns out to be far more general, and may be related to how the neural networks in our brains detect the features that allow us to sort the things in the world into their proper categories, "warping" perceived similarities and differences so as to compress some things into the same category and separate others into different categories.
Pérez-Gay Juárez, F., Sicotte, T., Thériault, C., & Harnad, S. (2019). Category learning can alter perception and its neural correlates. PloS One, 14(12), e0226000.
Pullum, G. K. (1989). The great Eskimo vocabulary hoax. Natural Language & Linguistic Theory, 7(2), 275-281.

NOTE TO EVERYONE: Before posting, please always read the other commentaries in the thread (and especially my replies) so you don't just repeat the same thing.
**BLOGGER BUG**: ONCE THE NUMBER OF COMMENTS REACHES 200 OR MORE (see the count at the beginning of the commentaries) YOU CAN STILL MAKE COMMENTS, BUT TO SEE YOUR COMMENT AFTER YOU HAVE PUBLISHED IT YOU NEED TO SCROLL DOWN TO ALMOST THE BOTTOM OF THE PAGE and click: “Load more…”
________________
Load more…
________________
——
After 200 has been exceeded, EVERYONE has to scroll down and click “Load more” each time they want to see all the posts (not just the first 200), and they also have to do that whenever they want to add another comment or reply.
If you post your comment really late, I won’t see it, and you have to email me the link so I can find it. Copy/Paste it from the top of your published comment, as it appears right after your name, just as you do when you email me your full set of copy-pasted commentaries before the mid-term and before the final.
——
WEEK 5: Week 5 is an important week and topic. There is only one topic thread, but please read at least two of the readings, and do at least two skies. I hope Week 5 will be the only week in which we have the 200+ overflow problem, because there are twice the usual number of commentaries: 88 skies + 88 skies + my 176 replies = 352! In every other week it’s 2 separate topic threads, each with 88 skies plus my 88 replies (plus room for a few follow-ups when I ask questions).
Colors, which are a case of innate categorical perception (CP), arise from the continuous physical phenomenon of light wavelength. Does this transition from continuous to discrete/categorical occur within the eye as a physiological mechanism (via cones sensitive to specific wavelengths), within the brain as a higher-level cognitive process, or perhaps through a combination of both?
In cases of color blindness, individuals lack a specific type of cone (typically green-red cones), making it challenging to differentiate colors along the green-red axis. Can we deduce, then, that CP for color is rooted in the physiological process of light transduction? If this holds true, and assuming the prevalent representation of deuteranopia vision (where the world is perceived in shades of blue and yellow) is accurate, these individuals would possess only two categories of color perception: blue and yellow. What appears red to those with normal vision would be perceived as a particular shade of yellow/blue, and what we perceive as green would register as another shade of yellow/blue. Consequently, for color-blind individuals, the perception of red and green becomes a learned continuous perception (between different shades of yellow/blue), rendering it more challenging for them to distinguish colors along the green-red axis.
Does my argument hold despite the oversimplification of the human color perception mechanism?
It's over-simplified. We see colors because of several levels of feature-detection in the retina and the rest of the brain. (The retina is actually part of the brain.)
Light-receptors (cones) that respond continuously to wave-length have peaks of sensitivity at a point in their continuum -- some in the long wave-length range, some in the medium and some in the short. The colors we see are ratios of the activity of these cones. Then at a higher level in the brain, the input from the Red peak receptors is paired in an "opponent" process with the input from the Green receptors. Similar pairing for Blue with Yellow. The opponent processes inhibit one another. (That's partly why after staring at Green for a while, and then looking at Grey, the Grey looks Reddish.)
In any case, color perception is somewhat complicated. You'll get an idea from the NLM reference link above. You can also query ChatGPT (but beware, because it could give you wrong information). For color-blindness ask it what dichromacy looks like. And google for images of what lacking long, medium or short wave length receptors would look like.
These feature-detectors, including CP, evolved between roughly 600 and 100 million years ago (mammals are the most recent, and not the most sensitive).
Learned CP is much weaker than innate CP, and can occur in vision, hearing and probably the other sense modalities too (e.g., in wine-tasters and perfume-makers).
In this article, I found the emphasis on the mutual influence between categorical perception and our response really relevant. Stimuli to which we learn to make the same response become more similar to one another, and stimuli to which we learn to make different responses become more distinct. This problem was addressed through language and the distinction between pa/ba and ba/da syllables, but it seems to me that we could apply this to other modalities, such as visual perception of colors or faces. As we saw with the Whorf Hypothesis, culture doesn’t seem to have an influence on color perception, or at least the categorical perception of colors. Nevertheless, we tend to have more difficulty differentiating faces of ethnicities we are not familiar with. Maybe the neural model of categorical perception could explain this. By growing up in a certain environment, we learn to differentiate elements based on specific invariants that might vary among cultures. For example, Caldara (2017) showed that eastern cultures tend to fixate more on the central region of a face during a face-processing task, whereas western cultures focus more on the eye and mouth regions.
(here is a link to the article: https://journals.sagepub.com/doi/abs/10.1177/0963721417710036)
I was also interested by the motor theory of speech perception. The reasoning that we cannot perceive a mixture of 'ba' and 'pa' because we personally cannot produce a sound in between makes sense and reminded me of the McGurk effect and mirror neurons. In the McGurk effect, our visual perception affects our auditory perception; for example, when we watch someone mouth 'ba' even if 'fa' is playing, we will most likely hear 'ba'. This led me to wonder if mirror neurons are involved in this effect by activating the parts of our brain necessary to move our own mouth into the position we are seeing. I also wondered if mirror neurons have a role to play in the motor theory of speech. This would assume mirror neurons could be activated not just by seeing, but also by hearing.
Another question I had involves the 'switch' moment (hearing ba suddenly instead of pa), which seems to vary person to person. Is this because of the quality of our machinery (the ears themselves) or does it have something to do with categorization?
Adrien & Josie, good comments.
Yes, there is learned visual CP too, and where perception/production mirroring is possible, as in speech, it plays a role too. It reduces the burden on learning, because the mirror-capacity is partly inborn (though that probably involves some learning too).
The exact location of the category boundary may vary (temporarily, as in short-term habituation from repetition, and in the long term, if the context of confusable alternatives changes). But most learned categories do not involve points along a sensorimotor continuum. Yet in the neural net models of category learning and CP, the changes in the hidden-layer internal representations as the net learns can be analyzed as points in a continuous space.
(For more on the McGurk effect and mirror-neurons, see reply to Mlica below.)
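To make the hidden-layer point above concrete, here is a minimal sketch (purely illustrative; the toy network, data and numbers are invented, not taken from any of the cited studies) of how compression and separation can be quantified: train a tiny net to sort points into two categories, then compare the average between-category and within-category distances in its hidden layer before and after learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stimuli: 2 features, category = whether feature 1 is positive
X = rng.normal(0, 1, size=(200, 2))
y = (X[:, 0] > 0).astype(float)

p = {"W1": rng.normal(0, 0.5, (2, 8)), "b1": np.zeros(8),
     "W2": rng.normal(0, 0.5, (8, 1)), "b2": np.zeros(1)}

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])               # hidden-layer representation
    return h, 1 / (1 + np.exp(-(h @ p["W2"] + p["b2"]).ravel()))

def cp_index(H, y):
    """Mean between-category distance minus mean within-category distance."""
    d = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    return d[y[:, None] != y[None, :]].mean() - d[same].mean()

H0, _ = forward(p, X)
print("separation minus compression, before learning:", round(cp_index(H0, y), 3))

# Supervised (error-corrective) learning: plain gradient descent on cross-entropy
for _ in range(2000):
    h, out = forward(p, X)
    err = out - y
    gh = err[:, None] @ p["W2"].T * (1 - h ** 2)      # backpropagated error
    p["W2"] -= h.T @ err[:, None] / len(y)
    p["b2"] -= err.mean()
    p["W1"] -= X.T @ gh / len(y)
    p["b1"] -= gh.mean(axis=0)

H1, _ = forward(p, X)
print("separation minus compression, after learning:", round(cp_index(H1, y), 3))
```

On this toy problem the index typically grows with training: hidden-layer representations of members of the same category move closer together while the two categories move apart, which is the learned-CP "warping" being described.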
Josie, this also made me think of mirror neurons and their connection to categories. We can only categorize things we understand, and we can only understand them because we ourselves are able to act upon them or somehow ground our understanding.
I think the last part of this reading was the most interesting part. I find it curious how we determine the “most abstract” categories like goodness or badness. We all have a different understanding of what is good and what is bad, meaning people could categorize different things to be in these categories, but I’m assuming we all have the same idea of what it is, just categorize it differently (subjectively). From the previous reading, I was confused about non-innate categorization, or learned categorization, so would this be the learned/non-innate categorization since we don’t rely on our sensorimotor capacities?
Yes, I also wondered about how we would determine an abstract category like "goodness" as well! I believe that this would be a learned categorization since it is not innate but rather something that we learn from feedback (teachers, parents, etc) in childhood. However, I also wonder if something akin to the motor theory of speech perception can be applied here as well. As you said, different people perceive "goodness" differently. Perhaps we recognize something like "goodness" the same way we produce it (i.e., we recognize an action as "good" if it is an action that we would produce with the same intention.)
Selin, children get plenty of corrective feedback on what they do that's good or bad.
All categories are abstract, because you need to learn to abstract the features that distinguish the members from the nonmembers.
We may differ in what we like to eat, but we learn that, just like on the mushroom island.
"Idea" is a weasel-word (and usually means "category").
Feelings are sensorimotor. We learn what kinds of things cause pain, but we don't have to learn what pain feels like.
Ohrie, yes, we learn (from corrective feedback) what our parents consider it bad for us to do, but we don't have to learn the difference between what it feels like to be sad or happy.
Yes, some things we already know from our mirror-capacities; but some people seem to learn the Golden Rule and some don't.
Ohrie, I think this relates to the other-minds problem. Just as you cannot know if someone else experiences sentience like you do, or at all, there are certain categories which rely on sensorimotor or cognitive perception that I cannot know for sure are experienced the same way by other people. For example, someone who has a defect in their eye cones may perceive colours differently than me but have learned to categorize their perception of colours using the same names as me - so my ‘blue’ is perceived by them as my ‘yellow’ but we both refer to the colour as blue. Expanding upon this idea, I am confused about how we are able to learn categories and names for internal states which are individual. While my friend and I may be able to look at the same object and both experience seeing it as red, and label it so, I cannot look at them and experience their emotional states. My question is how, then, do we first learn to categorize our emotional and internal states as children? When I was very young, did I cry and my mom tell me I was feeling ‘sad’? Couldn’t one cry for a variety of reasons? What if I had been crying because I was scared or angry? Perhaps there are aspects of emotion which are universal and inherent - facial expressions, body movements, etc. Would it be possible to explain the concept of ‘sad’ to someone who had never experienced it?
I agree with Zoe that certain categories, which rely on sensorimotor or CP, cannot be known for sure to be experienced the same way by all people. It is even inevitable that people with no defect in colour detection will perceive colour differently due to variations in their iris colours. Light irises can absorb more light than dark irises, which is why many restaurants in Western and Eastern cuisines use different lighting backgrounds for the dining experience. While reading these two readings, I did not come across words like ‘standard’ or ‘rule’; before, I believed that people perceive differently due in part to the fact that they have different standards or rules to apply, which others may not understand or share. However, terms like ‘standard’, ‘rule’, or ‘principles’ all appear to be weasel-words. Since the definition of categorization is to do the right thing with the right kind of thing, is a standard equivalent to a categorization?
This reading discusses the compression of within-category differences and the separation of between-category differences, which are trademark effects of CP. This made me think about the evolutionary importance of compression and separation, especially since the purpose of categorization is to do the right thing with the right kind of thing. For example, on mushroom island, it is much more important to be able to distinguish between two mushrooms of different species (one which is edible, the other poisonous), than to be able to distinguish between two mushrooms within the same species, which highlights the function of separation. I was wondering if compression within categories is then a result of perceptual narrowing (the tuning of perceptual mechanisms to relevant sensory input), reflecting the fact that our brains have limited capacity for what we can accurately distinguish. I also wonder if it is possible to have cases of separation between categories without having compression within categories?
I agree with the argument about the importance of separation of between-category differences, as it is intuitively evolutionary – bad vs good, safe vs dangerous, edible vs poisonous. Even more, this compression of within-category differences and separation of between-category differences is a phenomenon that has been studied in other fields of psychology (e.g. stereotypes). However, it was found in a study that people from a different group than ours are seen as more similar individuals (with less individuality/singularity). Would it be possible to translate that into terms of word categories? Do more familiar categories have more fine-tuning than others? How would that benefit us?
Jessica, with innate or obvious categories, like black and white, neither separation nor compression is needed to distinguish them.
When the members are harder to tell apart, between-category separation is important.
Within-category compression does not always occur; sometimes it's just no separation or less separation, compared to between categories.
Separation happens because the distinguishing features stand out when you have learned to detect them to tell different categories apart.
Garance, what benefits us is distinguishing the features of the kinds of things you should do THIS with from the features of the kinds of things you should do THAT with.
If the differences don't matter (no uncertainty to be reduced), all differences can get a little enhanced through unsupervised learning. (What's that?)
Unsupervised learning is done through observation without error correcting feedback, such as copying a hand movement that was observed. Unsupervised learning can only highlight feature/feature correlations and is better used when members and nonmembers are more salient and easy to differentiate. The cognizer will be able to correctly categorize (doing the right thing with the right kind of thing) based on just observation and repeated exposure in the environment. In the case of mushroom island, unsupervised learning would be just looking at the feature correlations between mushrooms (ex. all the red capped ones grow near trees) but this would not tell you which mushroom is poisonous (that would require supervised learning or a teacher).
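As a concrete (entirely made-up) illustration of that difference: clustering mushroom feature vectors can reveal the feature-feature correlations with no feedback at all, but only supervised feedback can tell the learner which cluster is the one to avoid. The features, numbers and labels below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented mushroom features: [cap redness, stem length]; two natural clusters
edible = rng.normal([0.2, 5.0], 0.3, size=(50, 2))
poison = rng.normal([0.9, 2.0], 0.3, size=(50, 2))
X = np.vstack([edible, poison])

# Unsupervised learning: simple 2-means clustering finds the feature
# correlations, but the cluster indices carry no meaning ("which one is safe?")
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in (0, 1)])
labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
print("cluster centers found without any feedback:\n", centers.round(2))

# Supervised learning: error-corrective feedback (someone got sick, or a teacher
# told us) is what attaches the consequence "poisonous" to one of the clusters
y = np.array([0] * 50 + [1] * 50)      # 1 = poisonous, known only from feedback
for k in (0, 1):
    frac = y[labels == k].mean()
    print(f"cluster {k}: fraction poisonous according to feedback = {frac:.2f}")
```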
In phonetics class, we learned that different languages define the same phoneme along different voice onset times (VOT). For example, /b/ in English has a VOT around +50 ms, while /b/ in French is around -100 ms (from my LING 330 notes). Interestingly, /p/ in French has a similar VOT to /b/ in English. I understand that as speakers of a language, we are able to categorize the sounds along the continuum correctly despite the cross-linguistic ambiguity because of learned CP. As we learn a language, we hear the different sounds and learn the differences, whether by instruction or by the context, respond to them differently, and, as per Lawrence in the reading, they become more distinct.
Omar, some phoneme boundaries are learned, with the help of imitation, sensorimotor mirror capacity, unsupervised and supervised learning and motor production boundaries (how?). But little from verbal instruction...
In my experience in linguistics classes, it has been interesting to experience the difference between the phonemes that feel innate to me (i.e. the phonemes that I used before I even knew what a phoneme was) and the phonemes that I haven’t heard frequently or that are used in a different context than I would normally use them. In learning the IPA chart, trying to use phonemes in a different way (from the verbal instruction of a professor) is very challenging. This difference in feeling reminds me of the example where Professor Harnad switches to Hungarian mid-sentence, and we feel the difference between understanding and not understanding. Even people who do not have linguistic training can still feel when a phoneme is used in a place where they would not use it in their own native dialect (the basis of accents).
Categorical perception aims to reduce the uncertainty about what to do with what, but the categories are only approximate. This was easier to grasp when I thought of subjective categories like "love" as we don't all use the same features to describe them. However, objective categories are also approximate (ex: reptiles, colours, etc). Describing a category verbally brings us closer to recognizing categories with features that are shared by the members of the same category that distinguish them from members of different categories. Although language allows for a sufficient approximation, it can always be modified and improved with more words.
The idea of "reducing uncertainty" through categorical perception made me pause and think for a while. I am wondering whether only learned CP has this capacity, as innate CP seems to be static and not implementation-independent.
For this reason, I am wondering whether we should treat innate and learned CP separately, or whether CP has an innate part that is extended through learning. It seems like I may have misunderstood something.
An unrelated side-note, which is open to be refuted: based on my understanding, innate CP is less important for this course (at least so far), as we are focusing on something dynamic, on computation and cognition.
Except in the case of mathematical categories (such as "even numbers" vs."odd numbers," whose features may be exact and exhaustive by formal definition) all categories are approximate. Even on the mushroom island, you could one day run into mushrooms for which the features that have safely sorted the edible ones from the poisonous ones turn out not to be enough, and you have to find more to tighten the approximation. This is true whether the distinguishing features are learned directly, through sensorimotor trial and error, or indirectly, through verbal instruction, and whether the categories are objective or just a matter of taste. Any of them could turn out to need updating.
What stuck out to me the most was the ending sentiment of how some of our categories must originate from a source beyond the sensory motor experience. Would acquiring categories and their respective CP through language alone be a demonstration of the nuclear power of language? Would the baseline “grounded” categories that can be built upon for higher order ones be made up of the kernel or a vocabulary even as small as the minimal grounding set?
I would also like clarification of how the kernel and the minimal grounding set differ and how a word could be part of the former but not the latter.
If the mushrooms' sensorimotor features turn out to need updating, so do any verbal descriptions of features that are based on them. Uncertainty can be reduced; but except in formal mathematics and logic, based purely on definition, uncertainty can't be eliminated. (Remember Descartes on doubt and certainty.)
We'll discuss Minsets more in Week 8b. A dictionary is a set of words in which every word can be defined from words that are in the set. The Kernel is a dictionary within the dictionary. It can not only define all the words inside it, but also all the words outside it, in the rest of the dictionary. The Minset, in contrast, is not a dictionary at all. Its words can only define what's outside it. (We discussed this a little in class: why is this so? Test your explanation on your own kid-siblings!)
But don't mix up categorization itself with CP, the perceptual change that may or may not occur when you learn a new category through its features. Explain that difference, too, to your kid-siblings...
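A toy way to see the kernel/minset idea at work (my own illustration; the four-word mini-dictionary below is invented) is to treat a dictionary as a set of definitions and check whether a candidate grounding set can reach every other word through definitions alone:

```python
# Invented mini-dictionary: each word is defined using only the words listed for it
dictionary = {
    "apple": ["red", "fruit"],
    "fruit": ["red", "sweet"],
    "sweet": ["red"],
    "red": ["apple"],          # deliberately circular: red <-> apple
}

def can_ground_all(grounding_set, dictionary):
    """True if every word becomes definable, directly or indirectly, starting
    only from the words already grounded (e.g. by sensorimotor learning)."""
    known = set(grounding_set)
    changed = True
    while changed:
        changed = False
        for word, definition in dictionary.items():
            if word not in known and all(d in known for d in definition):
                known.add(word)    # now definable, hence usable in further definitions
                changed = True
    return set(dictionary) <= known

print(can_ground_all({"red"}, dictionary))     # True: "red" alone can ground the rest
print(can_ground_all({"sweet"}, dictionary))   # False: the red/apple circle is never broken
```

The point of the sketch is only that a grounding set has to break the definitional circularity; it says nothing about perception, which is what distinguishes CP from categorization itself.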
In the “Categorical Perception” reading, the section on the motor theory of speech perception particularly grabbed my attention, in which the analysis of sound spectrograms of certain sounds, such as “ba” and “pa”, which lie along an acoustic continuum (voice-onset-time), is used to address whether speech sounds are innate or learned, or rather whether they are perceived categorically at all. It was found that “ba” and “pa”, like other sounds that change across a voicing continuum, are only perceived as either “ba” or “pa”, and nothing in between; this is categorical perception, once thought unique to speech, and the basis of the motor theory of speech perception. This theory assumes that we perceive either “ba” or “pa” based on the way sounds are heard, influenced by the way they are produced during speech. In other words, the motor theory of speech perception states that sensory perception is mediated by motor production. However, there are instances in which speech perception is not solely dependent on auditory information or categorical boundaries. An example of this is portrayed by the McGurk effect, which describes situations where there is a mismatch between the visual and auditory cues in speech perception. Indeed, when a person hears a sound while seeing a speaker pronounce a different sound at the same time, the brain often integrates the two modalities to perceive a third, intermediate sound. For example, if you hear “ba” but see someone saying “ga,” you might perceive the sound as “da.” This effect highlights that speech perception is not always dependent on auditory information alone or on categorical boundaries, but rather on a continuous interaction between sensory modalities, thus occasionally leading to perception that falls outside of strict categorical boundaries. I assume that cases such as the McGurk effect were involved in the rejection of the motor theory of speech perception, along with the new consensus that categorical perception actually occurs at every instance where perceived within-category differences are compressed or between-category differences are separated, relative to a baseline of comparison.
CP for ba/da/ga is a "mirror-neuron" effect for hearing and pronouncing speech sounds. But, if you think of it, so is the McGurk effect, except it also includes seeing the sound pronounced. For deaf people, that's the only perceptual side of speech production.
If you artificially induce a contradiction between the two for hearing people when they view themselves producing the sound, showing a lip position that cannot really produce the heard sound, it's like looking into a distorting mirror that makes you see your hand rising when you're lowering it.
In that case, all bets are off. If you saw that every day, perception would change, as in the displacing prism experiments, and what looks like leftward movement would come to feel like rightward movement.
But the McGurk effect is a complex distortion that cannot be made chronic like the prism distortion, except maybe in a Virtual Reality (VR) illusion, where electrical activity from your lips and tongue for ba and da are systematically transformed while you look at yourself talking in a VR pseudo-mirror. But to make that stick you would have to create the same VR distortion whenever anyone else says ba or da...
So whereas that might have some distant relation to speech CP, it's rather at odds with the adaptive function of mirror-neuron capacity, and probably unrelated to CP for categories without a mirror (perception/production) dimension.
The Whorf Hypothesis argues that colors are perceived categorically because they are named categorically in languages. For instance, for people with no color or light perception impairment, light blue is quickly distinguishable from dark blue. This hypothesis is supported by Winawer et al.’s (2007) study which shows that Russian speakers tend to be faster than English speakers at discriminating between light blues and dark blues, due to language categorization differences. Russian speakers are taught from an early age to distinguish light blue as “goluboy” and dark blue as “siniy”. Since this discrimination is due to a learned association from a young age, this experiment supports learned CP sensorimotor effects.
Anaïs, the Strong Whorf-Sapir Hypothesis is wrong for basic colors. The rainbow is perceived the same way regardless of language, naming and learning. The Russian "goluboy/siniy" boundary, however, like the Hungarian "vörös/piros" (blood-red/rose-red) boundary, is an example of the Weak Whorf-Sapir effect now called learned CP.
If there is a distinct difference between the weaker categorical boundaries set through learned CP and the stronger boundaries granted by innate CP, then where exactly do ad-hoc categories fit into this model? To me it seems that the boundaries for ad-hoc categories like “What I need to buy from the store” would be learned boundaries, possibly through direct sensorimotor trial and error, but more likely through verbal instruction. Though, despite seeming like they may be learned CP, ad-hoc categories seem to establish even weaker secondary boundaries than other learned CPs. Despite non-sensorimotor categories like “justice” also being distinguished through similar methods, why is it that those categories are far less transient than the ad-hoc ones we seem to make all the time? Looking through the lens of neural nets, I wonder how these trial-and-error CP models can demonstrate the dynamic, changing nature of ad-hoc categories.
Hi Ishan! I really like your question. From what I understood, your question is why ad-hoc category boundaries are so weak, and why ad-hoc categories are more "transient" (which I believe you are using to mean temporary) than other learned categories.
I don't think ad-hoc categories NECESSARILY have weaker boundaries. Consider case 1: you write the grocery list down. You are much less likely to change or add stuff, or to affect the boundary of the category "what do I need from the store?". Consider case 2: you did not write things down but have listed the things in your head. If you were to buy things which weren't on the list, these are still actually things you need, so one could argue they were part of the category all along but you forgot about them when you were making the list. Or if you do not buy things, it might be because of forgetting and not a boundary shift.
Another argument would be to use ad-hoc category's "transient nature" and say that since ad-hoc categories are constantly being created, you don't actually change the boundary when you add something to your cart which was not there but instead you create a new ad-hoc category "What do I need from the store right now?".
Hi Emma, your comparison of the two grocery cases is very interesting. If I am understanding correctly, you successfully demonstrate an example of learned CP: the creation of a physical list (containing language) enhances the boundaries of the ad hoc categories. But if we forget about buying certain things from a mental shopping list, won't that remove features from the ad hoc categories and thus result in a boundary shift?
And to generalize, if I may: can we say that forgetting anything is to remove features from categories?
I agree with Emma that these ad-hoc categories don't necessarily have weaker boundaries than other learned categories. Another way of thinking about it would be that when you create ad-hoc categories such as "things I need from the store", you are identifying the features objects must have in order to fit into that category, namely, that you need them at that time. That category is at least as well defined as any other category we have in our head since we can identify the objects that belong and those that do not and we are able to do the right kind of thing with them (buy them from the store). Even if it didn't occur to me that something was a member of the category when I created it, I may come across a delicious looking apple in the store and realize it also has the feature that establishes it as an instance of the "things I need to buy from the store" category. Furthermore, I'm not sure I see these categories as "transient" other than the fact that the important features to establish something as a member of the category may have some reference to time ("things I need from the store today"). That category remains as is forever but you never have a reason to use it again since tomorrow is a different day and new features may pick out the things you need from the store.
Stephen, I really like the part where you mention features for the category being that you need them at that time! Very nice way to put it into perspective.
The discussion here about separation and compression reminded me of Hopfieldian neural networks, and specifically attractor dynamics in these networks. Attractor dynamics imagine neural activity (historically represented by firing rate in individual neurons) projected across a number of dimensions equal to the number of neurons being recorded from. The overall activity of the system is represented by a coordinate point. The system activity can move across all the available dimensions, but is constrained by neurobiological limitations, and shaped by the probability that activity will evolve to certain points. For example, in a simple 2 neuron system (consisting of neuron A and neuron B) that we can project onto a 2 dimensional graph, if both neurons are firing at 1 Hz, this may be a stable state (one which the system will remain at unless otherwise provoked). If neuron A starts firing at 2 Hz, this may cause the overall system activity to evolve, moving until both neurons are firing stably at 2 Hz. In this model, both 1 Hz and 2 Hz firing would be considered attractor basins (state spaces that similar activity patterns will evolve to if left unperturbed). I find this an interesting model to apply to categorization, as the idea of qualitative distinctions (e.g., our green vs blue example) seems to represent semantically and meaningfully distinct concepts, which are thus categorized discretely, while royal blue vs sky blue are not (perhaps representing edge cases in an attractor model, patterns that are distinct but both ultimately converge to the ‘blue’ attractor basin).
Madeleine, there are several ways to model CP, both innate and learned, with neural nets. Hopfield nets are one way; deep learning nets are another (illustrated in the image of the separating blue and purple dots at the top of this page). The winning model that can scale up to T3 has probably not been found yet...
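For readers who have not met Hopfield nets before, here is a minimal sketch (the patterns are invented, not from any study) of the attractor behaviour Madeleine describes: store two binary patterns, then watch a noisy input settle into the nearest stored "category".

```python
import numpy as np

# Two stored binary (+1/-1) patterns, standing in for two "categories"
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
])

# Hebbian weight matrix, no self-connections
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def settle(state, sweeps=5):
    """Asynchronous updates: the state falls into the nearest attractor basin."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# A corrupted version of pattern 0 (two bits flipped): an "edge case" input
noisy = patterns[0].copy()
noisy[[1, 2]] *= -1
print("noisy input:", noisy)
print("settled    :", settle(noisy))
print("pattern 0  :", patterns[0])
```

The settled state matches pattern 0 exactly: nearby inputs are pulled into the same basin (within-category compression), while inputs nearer the other pattern get pulled the other way (between-category separation).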
The idea that CP can be induced by learning alone, rather than us innately perceiving between-category differences in speech sounds, seems more plausible. I once had the experience of a Hindi-speaking classmate trying to explain that her name used a consonant sound that was halfway between a t and a d in English, rather than sounding like one or the other, but I had a very hard time hearing the difference between sounds she was easily able to distinguish. If these categories of sounds were innate, based on our motor ability, we should be equally attuned to the differences in speech sounds, but because these are learned, our native language impacts our within-category compression and between-category separation.
Adrienne, actually the current view on phoneme production/perception is a bit more complicated.
There is a young critical period for language learning, after which later language learning never becomes as accurate as early language learning for most people, especially for pronunciation and perception.
We seem to be born with a full spectrum of phoneme feature-detectors/producers for all languages, but if your early language(s) don't use them, you lose them (as Chinese loses r and Japanese loses l).
I personally love to try to pronounce the hard ones: for the d/t it helps to aspirate the d(h). Spanish pero/perro is a good one too. Swedish sjuhundrasjuttiosju [listen to it in Google Translate], a cross between sh and f, is highly recommended, as is the Japanese cross between s and sh, as well as between f and h -- ask Ohri to show you how to pronounce the name of Toshiro Mifune. Hungarian offers a wide variety of non-diphthong variants of vowels that Hungarians blithely and unmistakably misuse in English; then of course there's the abiding challenge of Arabic kalb/qalb. And I can't leave out the most fun of all (but surprisingly easy): click-consonants. Yet the infants of click-language speakers perceive and produce them effortlessly (once they get some motor control over their mouths).
Hi Prof, would this mean that babies are innately born with the full range of phoneme feature-detectors for language, but based on exposure to our maternal language, we “prune” certain phoneme detectors which helps us hone a specific language through the categorical perception of phonemes relevant to this language?
I'm wondering if there is a language that makes use of all the phonemes, such that its native speakers are capable of producing them all even after growing up. So far it seems like all the languages I can think of leave some phonemes unused.
I think that this combination of innate knowledge of phonemes and the critical period, in which synaptic pruning is able to weed out unused phonemes, is very interesting. The capability of very young babies to determine which phonemes they will need to know, based on the stimuli around them, seems incredibly advanced to me. Does this critical period end before infants are able to successfully pronounce the phonemes of their language, or after? If it ends after they are able to produce the sounds, is this because the physical motor movements are necessary in order to categorize the phonemes that they must hold on to?
I want to add to this thread a bit more detail on the learning of language. As Prof. Harnad mentioned, we are born with the full spectrum of phonemes. This corresponds to Chomsky’s view that there is a universal grammar shared by all humans. I recall from a previous course that infants have innate categorical perception that allows them to discern slight acoustic changes. This ability starts to be lost when the infant is around a year old. After that, the infant starts to form speech-motor patterns, and these persist throughout life and become one’s mother tongue. The second half of the first year of life is the first critical period of language acquisition, and this period does end before the speech-motor pattern is formed (so usually before one’s first word is spoken). Another critical period is the first seven years of life. This period is for the more general acquisition of language, and the speech-motor pattern of one’s mother tongue influences the pronunciation of a different language.
This reading, as well as 6a, made me think about machine learning and how some software programs are developed to try to solve the hard problem of cognition. I was wondering whether these programs will turn out to be enough to give us an answer as to how we can think and how we can form categories. In fact, these programs are man-made, and even if they learn through supervised (trial and error with feedback) and unsupervised learning (although I think the machine would have to be at least a T3 to learn this way, because it would have to learn through exposure), isn’t there always a human being altering the program or making corrections to it so that it can improve how it learns? For example, if there is a computer program that is learning categories through supervised learning, it could not do it unless it has a body or unless someone, a person, comes and inputs feedback into the program so that it can learn. But how does that help us understand what is going on inside our minds that allows us to learn? Even if the computer program is set (programmed) to go and acquire feedback by itself from the “web”, it was a human who wrote the code for that program, and it is the human who is allowing it to obtain feedback from external sources.
Valentina, your scepticism is understandable, but it's misplaced. Algorithms don't have to be written by humans; they can also evolve, in our DNA. And a gustatory robot could learn to distinguish the mushrooms on the island with the same neural net algorithm that learns to imitate click-consonants in Xhosa. Cogsci is trying to reverse-engineer gadgets that were forward-engineered by Darwinian evolution (Chapter 7), which is itself a form of supervised learning (closest to reinforcement learning). All causal systems are "machines": we're just trying to figure out what kinds of machines thinking, feeling organisms are. And algorithms are algorithms even if they grow on trees, silently, in a human-free world...
I found the distinction between within-category compression and between-category separation very interesting. One reason for this distinction is that we benefit more from knowing what to do with this kind of thing, rather than the difference between things we do this with. A real world example is the difference in categorization into terms such as sneakers and sweaters. Different languages have different distinctions, and even within English there is variation in the categorization of these items and definition of these categories. This article leads me to believe that this ambiguity is present because these categories do not determine what we do with these items. This is reinforced by the fact that there is an invariant distinction between pants and tops. We universally agree only on the categories with significant differences in features and function.
Nico, I think I might understand some of this, but kid-sib doesn't. I didn't understand the sneakers/sweaters point, and could only barely guess what you meant about pants and tops... CP enhances the difference between members and non-members of a category: How?
I wanted to make my point more kid-sib friendly. What I meant was that CP enhances the difference between members and non-members using within-category compression and between-category separation. Meaning, members seem more similar, and non-members seem more different. These categories are often determined by function, so the difference emphasizes that there are different functions for the items.
My clothes example was an extension of this. It was merely meant to consider that languages categorize clothes articles consistently for obviously different functions (tops vs bottoms), but the categorization can vary if the functions are the same or similar (English speakers from different regions can mean different things by the word “sweater”, or categorize shoes differently with “sneakers” and “runners”, etc.). This makes sense because within functional categories, things are more similar and it does not matter what the subcategories are. This is a bit of a convoluted example.
In this reading I found Professor Harnad’s discussion of Whorf’s hypothesis and categorical perception particularly interesting. Whorf’s hypothesis, as it applies to color perception, is that “colors are perceived categorically, only because they happen to be named categorically.” Whorf’s hypothesis stresses the importance of learned language categories for our categorical perception of sensory stimuli.
Professor Harnad concludes this paper by stating that some of our categories are not the product of direct sensorimotor experience, but rather of language. These categories which are learned through language allow Whorf’s hypothesis to be considered, as they demonstrate the effect that naming and categorization have on our perception of the world. However, to conclude that the effect of language on CP is not purely a vocabulary effect, but a “full-blown language effect,” we should observe that how we categorically perceive the world is shaped by “what we are told about” things. This led me to consider how language is used from birth to shape our categorizations—for example, the verbal instructions/guidance of adults during infancy shape the categorizations children make and provide corrective feedback. How could we parse the "vocabulary" vs. “full-blown” language effect on CP?
(P.s. is this formulation of Whorf’s hypothesis, that “our subdivisions of the spectrum are arbitrary, learned, and vary across cultures and languages,” a strong or weak formulation?)
Shona, the Strong W/S Hypothesis is that language and naming cause us to see the rainbow. That's wrong; the rainbow is innate.
Weak W/S is CP.
I’m having some trouble understanding the “accordion effect” referenced in the section on evolved CP. If I’ve understood correctly, this is about the compression within categories, like in the ba/pa example. Each stimulus sound is like the ridge on the accordion fold, and when the accordion is pushed together, the ridges compress against each other, representing the stimulus sounds seeming more similar. Does the folded material between the ridges represent the variation between the stimulus sounds that gets ignored due to membership in the category? I’m confused because there’s also a reference to differences between stimuli from other categories being “expanded”; do stimuli from other categories exist on the same “accordion” then?
The "accordion effect" in evolved CP is indeed a fascinating concept! I was slightly confused before reading your comment, but your analogy of stimuli as ridges on an accordion fold is, I think, an insightful way to envision this phenomenon. When the accordion is pushed together, and these ridges representing sounds within the same category compress, it accurately captures the idea that within-category differences appear minimized, causing these sounds to seem more similar. When it comes to the "folded material" between the ridges, I see it as the subtle variations and nuances within a category that can often be overlooked due to our perceptual bias towards categorical grouping (Zoe Yurman mentioned a great study on selective attention in 6a to demonstrate this phenomenon). It seems like we tend to emphasize the commonalities and suppress the minor differences within a category, which is an essential aspect of CP. It also seems clearer to me that when you pull an accordion apart, it represents the separation between these different categories. In other words, stimuli from different categories are perceived as more distinct or separate from each other due to categorical perception, and this is the concept of "between-category separation." My guess is that you could see stimuli from other categories as part of the same accordion: in that case, using same/different judgments and signal detection analysis would be useful to discriminate within but also “between categories”. (second sentence in the section on within-category compression and between-category separation)
Liliane & Mamoune, the accordion effect is just that equal-sized physical differences between colors look bigger, perceptually, than they do within colors. In learned CP this occurs as a result of learning; but, instead of occurring along a sensory continuum like color, it occurs in categories that have many features, with the features that distinguish between the categories becoming more salient than the features that don't.
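Here is a tiny numerical illustration (the warping function is invented, purely to make the geometry visible) of that accordion: equal physical steps along a continuum, passed through a category-sensitive "warp", come out as unequal perceived steps, largest at the category boundary.

```python
import numpy as np

# Equal physical steps along a stimulus continuum (e.g. wavelength), scaled 0..1
physical = np.linspace(0.0, 1.0, 11)

# Invented perceptual warp: a sigmoid centred on the category boundary at 0.5,
# so within-category differences are compressed and between-category ones expanded
def perceived(x, boundary=0.5, sharpness=12.0):
    return 1.0 / (1.0 + np.exp(-sharpness * (x - boundary)))

steps = np.diff(perceived(physical))   # physical steps are all equal; these are not

for x, dp in zip(physical[:-1], steps):
    bar = "#" * int(60 * dp / steps.max())
    print(f"{x:.1f}-{x + 0.1:.1f}  perceived step {dp:.3f} {bar}")
```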
DeletePullum is beyond hilarious. I thoroughly enjoyed reading The great Eskimo vocabulary hoax where he debunked the popular myth that Eskimos have extensive vocabulary to describe different types of snow (and snow in general) and explained how this myth became so popular. I was surprised that the New York Times, ''America's closest approach to a serious newspaper of record, had changed its position on the snow-term count by over 50% within four years. And in the science section''. It made me laugh, but to be honest, it's not funny at all that misinformation is being spread by experts in the field for the sake of sounding impressive. I remember being fascinated by this ''fact'' when it was mentioned by a teacher in high school. It's these type of 'fun facts' that made me want to study CogSci (language and it's influence on our perception of the world - Sapir-Whorf Hypothesis and the likes)! But, alas, it seems that I was told outright lies...Good thing I'm also interested by moral decision-making and multilingualism I suppose...else I would have had to rethink my whole career!
Aashiha, it's not necessarily lies; it could also just be errors, incomplete information, wishful thinking or sloppiness. In the case of the W/S Hypothesis, the Strong one was wrong, the Weak one (learned CP) was closer to the truth. Scientific knowledge grows through better and better approximation; the antecedents are not lies, just not close enough approximations. Popular beliefs are another matter. (Look at the persistence of supernatural beliefs.)
If I’m understanding correctly, this article suggests a possible hypothesis for the hard problem: Using sensorimotor categorical perception, our brains (machines) are selectively detecting invariant features of members of a category to perform within-category compression or between-category separation. This allows us to categorize our sensorimotor interactions into increasingly abstract representations. Further, we’ve discussed how language is necessary for us to acquire new categories without the need for infinite exposure. Except the article states that language-induced CP-effects have yet to be demonstrated in humans using computational methods: is this because of the Other-Minds problem? We can’t be sure that a person (or CPU) is properly perceiving a described boundary if they have yet to be exposed to it?
Kristi, since that paper there have been computational models for CP. But CP is just an instance of the HP; it's certainly not an explanation. And in computational models it's not felt at all.
The OMP affects every case of OMP (except for what is vulnerable to Searle's Periscope: what's that?)
The periscope – the only time for which the OMP would be circumvented because an implementation independent machine could, in theory, learn to associate Chinese symbols appropriately in spite of having zero understanding of their meaning. Thus, we still don’t know how categorizing / cognizing thru learned CP or language provides the MEANING to the words we use…
I was wondering, on the topic of learnt CP, just how far its effects can be pushed. I know it is nowhere near as powerful as our innate CP, as mentioned in the above comments and in the reading. The reading concludes by saying that the “language induced CP effects remain to be directly demonstrated in human subjects,” but that for there to be a legitimate Whorf hypothesis claim it would need to drastically shift our perception of the world beyond the way we name things. This got me thinking of gestalt principles and the gestalt illusions. I know in some ways it is not the same, but I wonder how it would fit into the hypothesis if we could prime the way people interact with the world. Although I am unsure if this is a moot point, it is just interesting to think about how gestalt illusions may fit into the Whorf Hypothesis and whether they are grounds for any further investigation.
Ethan, which Gestalt effects do you mean, and how would you relate them to CP?
I mean the gestalt principles such as: similarity, continuation, closure, proximity, figure/ground, and symmetry & order. I suppose I would relate them to categorical learning as they are directly related to our perception of the world and are, I suppose, innate ways in which we categorize things that enter our visual system. A subset of this point is that the gestalt illusions related to these principles may help the case of the Whorf Hypothesis if they were on a big enough scale to drastically shift our perception of the world beyond the way we name things.
The difference between innate CP and learnt CP had me thinking about savants. There are so many stories in maths, music, etc. about people who seem to just know things that others have to learn. Like someone who can pick out prime numbers because they 'feel different' (though for prime numbers most people never even get learnt CP). This seems like a case of some people having innate CP for certain features that most people don't. I wonder how/why this happens?
(I love seeing my honors supervisor in the wild; Fernanda does such awesome work!)
I was similarly wondering about the way some children have seemingly innate grasps of logic and math and learn more quickly than others. The reading seems to suggest that the main categories we believe to be inborn are primary color perception and speech, and from an evolutionary standpoint, it doesn't seem like mathematics and abstract logic would provide much survival advantage. For those capacities to be innate on a biological level seems unlikely to me, though I suppose there could be a type of gene that would have this effect and that is only present in a rare few. But then I suppose we could find the "savant" gene, and that would be an issue for biology. Alternatively, could the development of a savant have more to do with their learned CP that they attain at a very young age?
If I understood correctly, I believe the paper shows indirectly how cultural and environmental factors can shape our understanding and perception. As for innate mathematical and logical skills, it's a blend of genetics, environment, and early exposure. The evolutionary value of abstract logic might not be direct, but the cognitive abilities underpinning it could have conferred advantages in problem-solving or strategic thinking. While searching for a singular "savant" gene might be overly simplistic, genetics do play a role. Still, early experiences and environment remain critical in nurturing these innate tendencies.
Marie, Adam, Marie-Elise: savant skills seem to have more bearing on innateness and learning than on innate and learned CP. But who knows? Savant skills are not yet understood.
It was mentioned in this article that categorical perception is defined both by within-category compression AND between-category separation. It was also stated that most categories have fuzzy boundaries. For example, there is a tendency to perceive several wavelengths of light as red and not distinguish hugely between them (compression), and there are wavelengths which are perceived as obviously not red, but the exact boundary between red and orange is not very clear. How does this boundary fuzziness fit in with the requirement for between-category separation in the definition of categorical perception? Why do we still call our perception of colour categorical, when intermediate points on the spectrum exist, and we perceive them as intermediates?
Hi Aya,
To answer your question "Why do we still call our perception of colour categorical, when intermediate points on the spectrum exist, and we perceive them as intermediates?” I think evolution and language have a huge role to play in it. Even though we can sometimes see intermediate points on the color spectrum, we tend to group similar colors into categories based on the evolutionary and cultural needs we’ve had. But I know that not all cultures/languages have the same categories when it comes to color perception. I also think that maybe it allows for more effective communication even if what we describe is not as "accurate" as what we see. Hope it makes sense, don’t hesitate to respond if you don’t agree!
Really interesting point, Aya! You reminded me of Homer and the Greek dilemma of colors, wherein scholars noticed Greeks never used words for certain colors--blue especially.
In the case of color perception, our tendency to categorize similar colors may stem from evolutionary and cultural factors, aiding in efficient communication. As Lili mentioned, different cultures and languages may categorize colors differently, highlighting the role of language in shaping these categories. I think that this topic is a good opener for the extra reading on Eskimo snow vocabulary.
I am a fan of linguistic history, so I also wanted to mention that, for old English and many older romance languages, apple was often used as a general term for fruits, nuts, and berries. It wasn't until much later that categorization for these healthy snacks was differentiated when there was a need. An example that I like to point to is the French word for potato, "pomme de terre"--apple of the earth. I guess these modified words are similar to what led to the categorization of snow for Eskimos.
Hi Aya, I had a similar question, and I'm not sure I'm totally satisfied with the culture/evolution answer... It seems like the main reason we see color perception as categorical perception (despite it being a spectrum) is that it /feels/ like there are many categories within it. Most continuous categories, like size, have two clear distinct bookends: big and small. Similarly, "Redness" could be a continuous category. Something like orange is somewhere in the middle of the spectrum of redness. To me, "color" seems to have many continuous categories within it, which is why my gut reaction is that color perception is categorical.
Aya, good points. But color and phoneme CP are atypical, because they involve a continuum. Sample a dictionary to see what proportion of content-words refers to a category that lies along a continuum, with a fuzzy boundary between members and non-members. When membership depends on discrete features, there are no fuzzy boundaries (except if there is no way to know what to DO or not-DO with members and nonmembers, in which case, for ordinary terrestrial cognitive scientists, there is no category at all).
Lili, you are right that, with continua, our named categories are approximations, and the boundary is a peak of uncertainty (on which it is better if your life does not depend!)
Daniel, all humans have innate, universal detectors for certain peaks in the wave-length continuum, but not for snows, or mushrooms, which they must learn. Yes, historic changes in terminology may well have been accompanied by learned CP effects.
Elliot, all true, but the underlying variable with colors is wave-length, which is not like a fretted guitar string but a continuous cello string, and the frets are inborn regardless of how our language labels the notes.
I have so much to say regarding this reading !
ReplyDeleteFirst of all, I recently had a music class where we addressed the topic of categorical perception, and thanks to this reading I understood the topic much better. In the class, it was mentioned that music, unlike speech, is not perceived categorically, because it lacks the categorical discrimination aspect: listeners are able to perceive sub-categories within timbres or pitches, and therefore discriminate musical sounds continuously rather than categorically. So, since music is classified as continuously discriminated, speech remains the one that is categorically discriminated.
Secondly, I have always been fascinated by color categories and the experience of colors. Even if every human has exactly the same visual system and categorizes colors in the same way (as mentioned in the text), can we really assess whether everyone's experience of colors is the same? To me, color perception is limited by the OMP, since we will never be able to tell that my green is your green and his green, if I can't get into your body or his to compare.
Hi Juliette,
DeleteIt's really nice to see that the reading has resonated with your music class and your interest in color perception. When I read your point that the subjectivity of perception makes it difficult to explain one's own sensations of color, I started thinking about the philosophical issue of qualia. Qualia are distinct, individualized sensory experiences, for example the intensity of a color or the warmth of the sun. The challenge of determining whether one person's subjective perception of a sensation corresponds to another's is very interesting to me as well.
Hi Juliette, I think "color categories" and "color experience" are two very different ideas. If I understood correctly, color categories determine what we can do and see in the world; they allow us to do things with the right kind of things, and to a certain extent they are innate but can also be learned. Further, we can study how humans acquire CP by looking, for example, at how the way they perceive differences in color varies depending on how well and in what way they learned the categories (as demonstrated in the paper, where those who did not learn a category did not change their perception, in comparison with those who did). In contrast, "color experience" implies how colors make us feel, which is much more difficult to study; you are right that we cannot know how a color makes someone feel without actually being in their head (making this the hard problem). For color categories, however, we may be able to better predict how someone will interact with them based on what we know and have measured about the visual system, and also based on the context in which the person was taught colors, following the weak Whorf-Sapir hypothesis.
DeleteVery interesting comments, thank you !! :)
DeleteIn the second paper it writes "Hence learning some categories does not generate category-specific CP, but merely an overall increase in all interstimulus distances whereas learning other categories does generate CP." Could we infer that the first condition describes continuous perception, and the second describes categorical perception?
ReplyDeleteAdding to the discussion of motor theory of speech perception, I think of the example regarding coarticulation, where the articulatory gestures for one sound influence those for adjacent sounds. In cases of coarticulation, the motor movements for a specific speech sound are influenced by the sounds preceding or following it, leading to a more continuous perception of speech. Additionally, in cases where people are exposed to non-native phonetic distinctions, their perception may not follow the typical categorical patterns, blurring the line between continuous and categorical perception.
Here is my first skywriting for Week 5; it seems like the blog has erased my posts:
ReplyDeleteProf. Harnad: And yet, with a Text Gulp much bigger than a dictionary, GPT manages to get a lot of mileage: How?
[Student Reply]
Prof. Harnad: (But no one has yet given an explanation of why GPT does so well...)
Skywriting 1: One hypothesis we discussed in class offers a perspective on why ChatGPT does so well. Even though individual symbols (words) in GPT don't directly resemble their referents, the sheer scale of the "big gulp" seems to make a significant difference. It's as if a kind of syntactic shape, although not iconic in nature, emerges through repeated exposure to propositions with the same syntax. This syntactic structure of propositions appears to aid GPT in generating coherent responses. It might be that the shape of these propositions aligns with what holds true in the real world, reducing the arbitrariness of symbol-to-referent connections to some extent, and explaining the “cheating” aspect of the system’s performance. It's not just a product of statistical vocabulary but maybe, GPT learns which propositional structures tend to go hand-in-hand with specific words, providing a relatively grounded approach.
Professor Harnad also mentioned another hypothesis centered on Universal Grammar. I'm really intrigued to understand how this hypothesis might pertain to GPT's ability to manipulate language so effectively.
Natasha, good reflections. That propositional structure might be another kind of iconicity (a resemblance between the symbols' shape and their meaning) is just a conjecture now, but it could be tested. Can you think of ways?
Delete(But this would not be grounding; it does not help ChatGPT recognize a mushroom in the world, nor what to do and not do with it, or how. It just allows ChatGPT to TELL that to a grounded human or robot who already has grounded category names in its head, connected to their capacity to recognize and manipulate their referents in the world: Do you see the difference?)
The other conjecture concerns Chomsky's idea that (innate) Universal Grammar (UG) may not operate at the level of language syntax but at the level of thought. The sentences that violate UG may not be grammatical violations, rather, no thought would correspond to their grammatical form. (More about this in weeks 8 and 9. So since no text in its Big Gulp violates UG, ChatGPT does not formulate any unthinkable strings, and that may somehow cut down on its probability of error.)
Here is my second skywriting for Week 5, to which I've added a couple of remarks and questions based on the recent skywritings and replies:
ReplyDeleteSkywriting 2: Symbol grounding is centered on the ability to establish a connection between a symbol and its corresponding referent. Achieving this connection necessitates the capacity to identify the referent in the real world, emphasizing the importance of having sensorimotor capabilities. These capabilities not only enable the system to distinguish referents based on their sensory features but also allow it to interact and manipulate them appropriately, ensuring that it is doing the right thing with the right kind of thing, which explains why GPT with a camera and wheels would not do the job.
At the start of the semester, I had the impression that symbol grounding involved attaching meaning to specific symbols, implying the necessity for a sense of understanding, or a "feeling" associated with that meaning (sentience). I thought that was why symbol grounding illustrated the limitations of computationalism when attempting to reverse engineer the brain. However, following this week’s reading, it has become clear that addressing sentience isn’t within the scope of the SGP, but it comes back to solving the Hard Problem. Even with a T3 level computation, which requires symbol-grounding, we would not be able to fully assess whether there is “meaning” associated with the symbols – this goes back to the Other Minds Problem, i.e., one’s feelings are not accessible or observable to anyone but themselves.
Additional remarks: Even if we could access someone else's feelings, the fundamental issue of explaining how those feelings are generated would still persist. I feel comfortable with these topics now, but one of Professor Harnad’s replies is still unclear to me; “the reason the HP is hard is because of the solution to the EP (once it's solved)” – how does this relate to the previously stated reasons for the HP's insolubility? Another aspect I'm curious about is whether a grounded T3 system could rely solely on computationalism. It strikes me that if everything in its sensorimotor capacities were converted to symbols (similar to how our receptors convert input to electrical signals), it might be reasonable to imagine an algorithm allowing it to manipulate objects appropriately, assessing all parameters of these interactions through different types of sensors and converting them to symbols.
Hey Natasha! Do you think, then, that a better understanding of CP could help bridge the gap between symbolic AI and sensorimotor capabilities, contributing to more advanced natural language understanding and generation? I know ChatGPT is more of a predictive model than an AI. Considering categorization and CP, do you think they could contribute to the development of symbolic understanding in AI systems like GPT? Curious what a fellow student has to say.
DeleteNatasha, your summary in your paragraph 1 is spot-on.
DeleteKid-sib does not know what you mean in par. 2 by "attaching meaning" to a symbol -- nor even what you mean by "meaning".
Grounding, as you describe in par. 1 seems to answer your question, but grounding only concerns DOing, and especially doing CAPACITY. Yet you're right that there's more to meaning than doing-capacity ("Easy Problem"). It also FEELS-LIKE something to MEAN something when you think or say it. That's more than grounding, but it's not something you DO, it's something you FEEL ("Hard Problem"). So "meaning" = (a) grounding (EP) plus (b) what it feels-like to think, say, understand or mean something in words (HP): to have it "in mind" (if I allow myself to use that weasel-word).
Yes, computationalism ("Strong AI") has at least two things wrong with it: (1) the symbol grounding problem and (2) what Searle showed with his OMP "Periscope". How did Searle show that cognition (e.g., understanding) could not be just computation? (It draws on what it FEELS-LIKE to understand or mean something.) I think you already understand this.
Par. 3: Why will solving the EP make solving the HP even harder? Because solving the EP means successfully reverse-engineering the causal mechanism that explains how and why we can DO all the things we can do. How/why explanation is causal explanation. Once you have explained everything that is observable (EP) there are no more causal degrees of freedom left for explaining how and why we can not only DO but also FEEL. This is not the OMP, it's the EP. Feeling looks like it's causally superfluous, once the solution to the EP has explained everything observable that we can DO.
For your last question -- could computationalism ground itself -- think of the differences between a computational simulation of a rocket launcher and a real, physical rocket launcher. The Strong C/T Thesis says that you can simulate all the properties of a real rocket launcher computationally. But that isn't a real rocket launcher; it can't launch real rockets, any more than a simulated ice-cube can melt. By the same token, a computational simulation of a grounded robot that passes T3 in a computationally simulated world (all squiggles and squoggles) cannot recognize or pick-and-eat a real apple. So it's not yet a solution to the EP until you show that if you build a real T3 robot according to the properties in the computational model, it can recognize, pick and eat real apples, Turing-indistinguishably, lifelong, from any other one of us, like Anaïs. But then that real T3 robot is not just computational.
Daniel, what do you think is the answer to your question?
Can categories, and their accompanying CP be acquired through language alone? I find this question very interesting yet I'm lazy so I asked ChatGPT. It said that CP can be influenced by language but not acquired through language alone. This relates to the idea that we can learn words like ‘institution’ by using other more ‘grounded’ words to help describe the word. We can then refine and expand our vocabularies, moving out of the kernel. We however can’t forget the non-linguistic factors that help us build our vocabularies like context, experience, sensory input, etc. We also use language as a tool to do the compression within categories or separation between categories.
ReplyDeleteFiona, ChatGPT has no basis one way or the other for telling you that. The question has not yet been tested, because the perceptual and electrophysiological experiments and the computer modelling have not yet been done.
DeleteCP is a perceptual, indeed an attentional effect. Up-weighting and down-weighting features can take the form of an attentional bias. If a note in a bottle floats up to the castaway on the mushroom island with just the words "the long-stemmed ones with the white dots on top are the poisonous ones", it would cause a perceptual change almost immediately (if believed -- and the default hypothesis, unless you know you are being informed by a repeat liar, is that what you are being told is true).
Perceptual biases are easy to induce indirectly through just words alone ("hearsay"). CP after learning directly through trial and error is a bias too, induced by the learned feature-detectors.
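To make the up-weighting/down-weighting idea above concrete, here is a minimal numerical sketch (a toy example of my own, not from the readings; all feature values and weights are invented): if attention is treated as a weighting over feature dimensions in a distance measure, up-weighting the distinguishing features and down-weighting the irrelevant ones shrinks perceived within-category differences and stretches between-category ones.

```python
# Toy sketch: attentional re-weighting of features "warps" perceived similarity.
# All numbers are hypothetical; the point is the before/after pattern, not the values.
import numpy as np

# Each mushroom = [stem_length, dot_whiteness, cap_size, colour_hue]
# Only the first two features distinguish poisonous from edible (by assumption).
poisonous = np.array([[0.9, 0.8, 0.4, 0.2],
                      [0.8, 0.9, 0.7, 0.6]])
edible    = np.array([[0.2, 0.1, 0.5, 0.3],
                      [0.1, 0.2, 0.8, 0.7]])

def dist(a, b, w):
    """Weighted Euclidean distance: w acts like an attentional bias over features."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

uniform = np.array([1.0, 1.0, 1.0, 1.0])   # before learning: all features weighted equally
biased  = np.array([2.0, 2.0, 0.2, 0.2])   # after the "note in the bottle"

for w, label in [(uniform, "before"), (biased, "after")]:
    within  = dist(poisonous[0], poisonous[1], w)
    between = dist(poisonous[0], edible[0], w)
    print(f"{label:6s}  within-category: {within:.2f}   between-category: {between:.2f}")
```

Nothing in the stimuli changes between the two printed lines; only the weighting of their features does, which is the sense in which hearsay alone can induce a perceptual bias.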
Attention is especially easy to bias. That's part of the power of language. We'll discuss that more in Week 8.
So don't use ChatGPT lazily! It will misinform you, and make you lose marks: ask for evidence, and look it up to check whether it's right. (Do that with "Stevan Says" too; you're somewhat safer -- but also never certain -- when I don't say "Stevan Says"; and even with "Stevan Says," the best active-kitten strategy is to challenge it, and to seek counter-evidence and think of counter-arguments.)
The last part of the paper, concerning categorical perception and the Sapir-Whorf hypothesis, was fascinating! What stood out to me was how foundationally computational the explanation of language-induced CP is in the neural network simulations (Booleans, higher-order expressions...). It raises the question of how such seemingly algorithmic processing can be performed by an actual biological neural network, and it links back to the main question of the course: is consciousness computational?
ReplyDeleteAimée, a biological network is doing the processing when you are doing long-division in your head, or proving theorems.
DeleteBut neither category learning nor CP needs to be FELT (as we will see in the neural net models of it). So they have no bearing on "consciousness" (w-w) either way -- except we know that it really does feel different to see green or blue, or to hear an oboe or a clarinet...
Based on the article, I am more convinced that learned CP carries more weight than innate CP, and that many categories are gained through learned CP. I think you can learn new categories based on language alone. We know every feature of a unicorn, such as the horse-shaped body and the cone-shaped horn, and by doing the bottom-up process we each form our own interpretation of unicorns. It is not necessary that everyone has exactly the same image, because it is highly subjective in the first place, and you can never be proven wrong until there is a real unicorn that does not look like the one in your own head. So people can learn new categories through language alone, and, on the evolutionary view, they learn them in order to achieve certain tasks.
ReplyDeletePerception can also shift between categorical and continuous. For example, through repeated error-correcting feedback, the distinctions within a category can become finer and finer. This is why painters usually have a better sensitivity to slight color changes than people who rarely draw. People can also be trained in the opposite direction, which convinces me that what matters is the function of learning those categories: the moment I successfully use a new category to achieve something is the moment I have mastered it.
Eugene, good observations, but don't conflate category learning and CP: What is the difference? Category learning need not induce CP.
DeleteIf I understood correctly, everyone has some innate feature-detectors that can be influenced through learning. For example, babies have a full range of feature-detectors and are able to absorb new language rules with ease, but they slowly lose this ability and retain only some of it depending on how they are raised and what they learn. This can be explained by changing neuroplasticity as we grow older. However, just as a few comments above have mentioned, I wonder what explains the differences in the degree of learned vs. innate categorical perception from person to person, rather than across age: some things are intuitive to some people while others have to train harder just to grasp the same thing. Is this determined by our genes, or is it related to the hard problem and so cannot really be explained?
ReplyDeleteAndrae, the case of inborn feature-detectors that are lost if you don't use them within a critical period is not universal in category learning! It is a special effect in the case of important capacities like language (as well as imprinting in ducklings; Chapter 7). We'll discuss the importance of some genetic variation then too.
DeleteI see! Are some motor skills and navigation skills also examples of innate feature-detectors that are lost if not used within the critical period?
DeleteI am interested in the following sentence from this article: “the direct function of categorization is to differentiate members from non-members, so as to be able to do the right thing with the right kind (category) of thing. This requires selectively detecting the features that distinguish the members from the non-members and ignoring the features that do not distinguish them” (12). It reminds me of similar content I recently learned in a child development course about the way infants categorize things. If an infant needs to distinguish between two animals, such as an elephant and a tiger, they will note that both animals have limbs and conclude that they belong to the same category of animals. But when distinguishing between fish and elephants, they can't tell whether fish are animals like elephants, because they are used to animals having four limbs, whereas fish have no legs, and so they get confused.
ReplyDeleteSo this raises a key point in the discussion: when classifying members and non-members, we need to make a series of selective distinctions. It is often easier to compare the differences between two things than to compare their similarities.
For example, in the previous infant example, when distinguishing between a fish and an elephant, the biggest difference between them is that one has legs and the other does not. Then it is easier to categorize them rather than seeing if they can breathe or if they can move.
Siyuan, similarity and difference are complementary. I'm not sure if one is easier to learn than the other. But it's true that the emphasis in category-learning is on differences (between-category separation), because we have to learn to do different things with different kinds (categories) of things. The complementary effect of within-category compression may just be because the attentional bias is toward finding features that distinguish members of different categories, rather than toward finding differences within categories.
DeleteI have a few comments on the section on the motor theory of speech perception. Like Kristie's comment above, I also thought of coarticulation when I read this section, and I want to add that virtually every phoneme that occurs in natural speech is influenced by the phonological environment it occurs in, and no two instances of the same phoneme in different environments will be acoustically the same. For example, a vowel will be produced more nasally (the velum lowers, opening the nasal passage) when there is a nasal sound (like m, n, ng) that occurs even multiple syllables away in the same word or even in a subsequent word. For this reason, speakers of different languages will categorize certain sounds differently, which suggests the non-existence of objective phonemes and rather that phonemes are "arbitrary points along a continuum" (that is, each individual subconsciously sets the conditions for the membership and non-membership of acoustic signals to phonemes arbitrarily). For instance, Portuguese distinguishes between nasal and non-nasal vowels and would thus not perceive vowels that have been nasalized to a certain extent by coarticulation as the same as non-nasalized vowels; Portuguese speakers would consider them completely different phonemes, whereas English speakers would not. And the same is true for speakers of the same language: ask two English speakers to say whether they hear "ba" or "pa" for one hundred recordings of bilabial (p/b) sounds that each have a different value for voice-onset-time, and these two speakers will almost certainly not give the same answer for every recording. "Categorical perception", then, must be subjective, and no two speakers of English will converge exactly on the change from ba to pa, DESPITE the fact that for each speaker the change will be abrupt. I think this serves as evidence that our perception of the world really is defined by discrete categories that are not objectively out there in the world but rather set (albeit partly by experience) in the architecture of each individual's mind.
ReplyDeleteHi Jordan, your argument is very well constructed and leads to a pretty logical conclusion. However, I would like to nuance it by saying that CP can't be simply subjective, because of how speech perception works intrinsically. As mentioned in the paper, categorical perception requires both discrimination (the capacity to tell stimuli apart when a series of them is presented) and identification (the capacity to categorize a stimulus without a referent present). These processes occur simultaneously whenever a speech sound is presented, and they lead instantly to its categorization (e.g., /d/ vs. /t/). This mechanism, occurring in the brain, does not vary among individuals; it usually works the same way for everyone (discrimination + categorization), which makes it quite objectively constant. This leads me to add to your point by arguing that CP is not simply subjective, but rather a combination of both subjective and objective mechanisms.
DeleteI found the discussion about the Whorf Hypothesis and its implications for our understanding of color perception interesting, since I also had prior knowledge about this topic from my previous psych classes. Is it possible to acquire categories and their associated categorical perception solely through language? Harnad highlights that although a direct demonstration of language-induced categorical-perception effects in humans is still lacking, there is evidence of the impact of naming and categorization on our perception. In order to show that the influence of language on categorical perception extends beyond mere vocabulary and constitutes a comprehensive effect of language, it is necessary to demonstrate that our categorical perception of the world is influenced not just by how things are named but also by what we are told about them.
ReplyDeleteThis is not a question or comment about the content of the reading, but the discussion of categorical colour perception made me think of a pretty cool 2009 study I read about that showed evidence for linguistic relativism (or rather, the weak version of the Sapir-Whorf hypothesis: that there is a relative change in cognition depending on language). In brief, the study looked at colour perception in native Greek versus native English speakers, as Greek has separate words for dark blue and light blue, whereas English only has one word for blue (I just saw in the comments that this is also seen in Russian, and in Hungarian for red, and probably in a lot of other languages). Essentially, the results showed that native Greek speakers had a significantly greater and faster ability to perceptually discriminate these two colours. Although English speakers can obviously look at the two shades of blue and perceive the difference, there is some relative difference in perception between Greek- and English-speaking subjects, simply because the Greek speakers had two linguistically separate categories for what was being tested. Now, this doesn't prove the strong Whorfian claim that colours are perceived categorically only because they are named categorically (linguistic determinism), but it does show some evidence that language guides our perception to an extent (linguistic relativism, AKA learned CP). I'm not saying anything new here, but it was cool to see the theory supported in application!
ReplyDeletehere's the actual study and results: https://www.researchgate.net/publication/24037812_Unconscious_effects_of_language-specific_terminology_on_preattentive_color_perception
This reading - specifically our categorical perception of the continuous wavelengths of color, and the distinction between directly grounded categories and abstract categories - inspired a 'theory' of the world as a hierarchy of categories: the natural world according to physics is described by continuous categories (the electromagnetic spectrum, sound waves), and perhaps by the categorical quantization of energy levels in atoms. Within this world, chemistry perceives the categorical elements seen on the periodic table (and other things like the states of matter). Biology evolved to present and to perceive a huge number of categories, with an incredibly diverse and complicated set of organisms and things they do. With language, humans have broken free from learning only directly grounded sensorimotor categories to an infinite possibility of abstract categories (the social-science level of this hierarchy!). If this is true, does it mean that categorization (and therefore consciousness) is all made up, invented by Darwinism and language? So what? I don't think there's any insight to this.
ReplyDeleteWhat really stuck out to me in this text is the Whorf Hypothesis, as it provoked a bunch of thoughts and questions about language and cognition. I was wondering: if language can influence the way we categorize, for example colors, does it also influence the way we think, perceive and interact with the world around us? In the grand scheme of things, I remember taking courses (PSYC 213 and 215) which explored this concept, where, for example, languages that rely heavily on gendered nouns (such as Spanish and Russian) were associated with ways of thinking in which regressive and very discriminatory policies are more prevalent in the countries where those languages are spoken. Does this idea of language influencing thinking also apply to non-gendered languages to the same extent?
ReplyDeleteI also find the Whorf Hypothesis extremely interesting! It’s crazy to think about how the use of language can alter how individuals perceive themselves and the world around them. A friend actually recently told me that they noticed in class that their professor made a point to say “girls and guys” instead of just saying “How are you guys doing?” or during a presentation, a student made a point to say “saleswoman” instead of “salesman”. We see these gendered terms all the time, like “policeman” or "fireman” and it is interesting how changing these traditional terms highlights how deeply embedded they really are in society. The use of language can perpetuate these sexist ideologies and change the way society views different individuals.
DeleteHi Megan,
DeleteAside from gender, I was also wondering how much the different terms we use for certain objects, people, and professions matter. If I call a policeman a cop, does that change anything? If I call a fireman a firefighter, does that mean anything different? If I have more than one category for a certain color instead of just saying, for example, blue, does that seep into my daily life in other areas? Will it change how and why I categorize things the way I do? It is so interesting to see the impact of language on cognition and vice versa!
The talk about an object’s membership degree in a category reminded me of the distinctions between a prototype, an exemplar, and an ideal of a category. The prototype approach is illustrated by the “birdness” example, where some birds are considered more central to the category. Exemplar theory compares a new object to the previous objects stored in a category, while the ideal is the perfect object pictured for a category.
ReplyDeleteWould you say that learned or language-induced CP (not innate) could be related to sparse coding or synaptic pruning? With sparse coding, as neuronal firing becomes more sparse, this could be occurring simultaneously with the compression and separation of CP. We pay attention to the salient, invariant, separating features while ignoring the smaller similarities (between two categories) and differences (within categories). Therefore, we might end up with fewer neurons firing (fewer features to detect) and more storage capacity for more categories (continuous learning throughout life). It also lines up with synaptic pruning in childhood development: fewer connections are necessary because the child has already categorized much of the world it has been exposed to.
ReplyDeleteNatasha, yes, innate-features could already be picked up by unsupervised learning, plus a little supervised learning to guide us in what to call them.
ReplyDelete"Top/down" vs "bottom-up" is a bit of a weaselly area. Verbal = top-down and Sensorimotor = bottom-up? Maybe. But very fuzzy. What's clearer is that direct grounding is bottom up.
ReplyDeleteCategorical Perception (CP), which stems from psychophysics and auditory perception, is the shift in one's perception as a result of learning, in which objects in different categories seem more different and/or objects in the same category look more similar, before versus after learning the distinguishing features of a category (differences are "compressed" within categories and "expanded" between categories). CP can be innate or learned, yet innate CP is quite rare and most CP is due to learning. I remember that a couple of lectures ago we briefly talked about how back-propagation allows neural networks to distinguish features, acting as a reinforcer in which connections are strengthened (if they get you to the right place) or weakened (if they get you to the wrong place). Since I don't have a computer science background I am struggling to understand this idea (plus the "hidden-unit representation" part in the reading), and I was wondering how it relates to a neural network model of CP. When a neural network learns a category, how does CP occur, and does back-propagation use mostly trial-and-error to selectively detect distinguishing features?
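Since the question is about hidden-unit representations, here is a minimal toy sketch (my own illustration, not a reimplementation of the models cited in the reading; the stimuli, network size and learning rate are all invented) of the usual way CP is demonstrated in neural-net simulations: train a tiny network by back-propagation to put a boundary on a continuum, then compare the distances between the hidden-unit representations of neighbouring stimuli before and after learning.

```python
# Toy sketch: does backprop "warp" hidden-unit distances around a learned boundary?
import numpy as np

rng = np.random.default_rng(0)

def encode(x, centers=np.linspace(0, 1, 6), width=0.15):
    """Coarse-code a point on the continuum as graded activity over 6 input units."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stimuli: 8 evenly spaced points; category = 0 below the midpoint, 1 above it.
xs = np.linspace(0.05, 0.95, 8)
X = np.array([encode(x) for x in xs])           # shape (8, 6)
y = (xs > 0.5).astype(float).reshape(-1, 1)     # shape (8, 1)

# One hidden layer, trained by plain gradient descent (back-propagation).
W1 = rng.normal(0, 0.5, (6, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 0.5, (3, 1)); b2 = np.zeros(1)

def hidden(X):
    return sigmoid(X @ W1 + b1)

def neighbour_distances(H):
    """Distance between hidden representations of adjacent points on the continuum."""
    return np.linalg.norm(np.diff(H, axis=0), axis=1)

print("before training:", np.round(neighbour_distances(hidden(X)), 3))

lr = 1.0
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(H @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # backprop: output-layer error signal
    d_hid = (d_out @ W2.T) * H * (1 - H)        # backprop: hidden-layer error signal
    W2 -= lr * H.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(axis=0)

print("after training: ", np.round(neighbour_distances(hidden(X)), 3))
# If training converges, the 4th entry (the pair straddling the boundary at 0.5)
# should now be the largest: between-category separation, with the within-category
# pairs relatively compressed.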
ReplyDeleteIt was interesting to learn how categorical perception arises via an "accordion effect", whereby features are compressed and separated relative to a neutral measure of the stimulus being studied. I find it interesting that this compression/separation of our perceptions, which allows category perception, is biased by evolution. The article mentioned that colour and speech-sound stimuli specifically are "warped" in this accordion-like manner by evolution in humans, but I'm wondering whether there are any cross-species patterns in the types of stimuli that are warped by evolution. If we experience some innate warping of our perception of speech sounds, is this warping exaggerated in bats, for whom sound categories are critical to being able to fly around?
ReplyDeleteCategorical perception, or the idea that within-category differences are compressed and between-category differences are expanded, resulting in a fundamentally different perception of the world and the relatively abrupt perceptual change at category boundaries, is immensely interesting. Without categorical perception, a rainbow would not look to be comprised of such distinct bands but rather as a continuous or gradual change in color. This immediately got me thinking about the universal linguistic ability of newborn infants, which is where they can, among other capabilities, distinguish phonemes in any language on Earth. However, as they age, they lose this ability and become sensitive to phoneme distinction in their language and insensitive to that in other languages.
ReplyDeleteSpecifically, I’m thinking about how this relates to the discussion of speech perception in the reading for this week. The paper mentions that speech sounds lie along a continuum of voice-onset-time, but nonetheless, speech perception of this continuum is such that perceivers hear distinct sounds (i.e., one sound or the other) rather than the entire continuum. Perhaps the loss of the universal linguistic ability in infants (i.e., distinguishing phonemes and speech sounds in any language) is a matter of progressive phonemic/sound category learning in your native language and not in other languages. More specifically, in the infant’s native language, phoneme/sound categories are learned, based on sensory interactions with the infant’s environment (in which only their native language is spoken), which results in categorical perception and the effect of perceiving a sound continuum as distinct sounds that can be identified. In the languages that are not the infant’s native language, phoneme/sound categories are not learned, and so no categorical perception is developed and the sound continuum is perceived only as a continuous signal that cannot be broken down into distinct identifiable and categorizable sounds.
In "Categorical Perception" (Harnad, 2003b), what I found the most interesting, although only briefly mentioned, was the fact that categories determine how we see and act upon the world, which, as said in class, is the basis of cognition. This sentence alone made me rethink how cognition really is categorization. If the way we go about the world is based on sensory integration, this integration is actually the attribution, based on similar or different sensory features, of certain things to a certain set of things, which allows us to do specific things with specific kinds of things. This means that because we are able to categorize, we can properly use the objects around us, recognizing that they can look different and be distinct while serving the same function.
ReplyDeleteAn interesting theme I found in the "Categorical Perception" reading (Harnad, 2003) was how many of the perceptual abilities of humans and other living organisms simplify and change how we view and interact with the world. More specifically, there are two main forms of perception that this paper focuses on: (1) categorical perception, which forms discrete categories, and (2) continuous perception, which is more like a gradient. As sensorimotor systems that interact with a complex world, however, humans would require both of these types of perception.
ReplyDeleteI find that the way in which neural structures in living organisms are anatomically structured to accommodate for both these forms of perception to be the most fascinating. For example, in neuroscience, there is population coding and labeled-line coding that are used to analyze stimuli features in the environment. Population coding COMPARES relative neural activity across different neurons, and therefore contributes to continuous perception. Labeled-line coding is a system where one neuron will fire to only one specific stimulus, therefore it's more so related to categorical perception (the “all-or-nothing” neural response). Could these sensory systems serve as a sufficient neural model? It was highlighted in the “Computational and neural models of Categorical Perception” section of the reading that not much is known about the neurocircuitry regarding this (Harnad, 2003) — however, since this reading was written in 2003, I’m wondering how much research has been done from this perspective since then?
I think the Whorf Hypothesis is so interesting. The fact that the language you speak affects how you process reality is so cool to me. Differences in grammar do correspond to different ways of thinking; they force you to think about different things. Turkish has a suffix, "mis", that you put on verbs to report things you did not witness personally. In that sense, you always state your degree of subjectivity, and this suffix does not really have an equivalent in English. It could be translated as "it seems", but that does not have the same effect.
ReplyDeleteCool fact, Marine! I did not know that about the Turkish language, and honestly that would be pretty useful in English. The Turkish suffix "mis" is a perfect example of how language can shape the way we express our experiences. I agree that the Whorf Hypothesis is interesting in outlining the effect grammar has on how we perceive different things in the world. It's a reminder of the intricate relationship between language, culture, and cognition. The way different languages handle nuances like subjectivity and perception can truly shape our cognitive processes, and exploring these linguistic differences can open our minds to new perspectives and a deeper understanding of how we perceive the world through language.
DeleteCategorical perception is fascinating to me, and I wonder a lot about the moments when it isn't there, Sartre's chestnut tree and such. Of course in the day to day it's incredibly important, one of the main avenues of uncertainty reduction, but there are ways around it.
ReplyDeleteA classic is to stare at some painting until all the shapes fall apart (some forms of meditation work on this), or to stare at your own face in the mirror until you can't recognize yourself (that's where the Bloody Mary thing kids do to each other comes from). It takes a while, but these things can give a glimpse into that "blooming, buzzing confusion": the visual-perceptual equivalent of repeating a word until it loses meaning.
I want to confirm that I understand the concepts of within-category compression and between-category separation. From my understanding, within-category compression means that, within a specific category, the perceived differences between items or stimuli are reduced, so items of the same category are perceived as more similar to each other than they actually are in physical terms. Between-category separation is the opposite: items from different categories are perceived as more different than they actually are in physical terms. However, I am having a hard time thinking of different examples where this would occur…
ReplyDeleteHi Maria, I also had a bit of a difficult time trying to think of other real-world examples. Within-category compression involves perceiving items within the same category as more similar to each other than they actually are. Between-category separation involves perceiving items from different categories as more distinct than they actually are. Within-category compression can be observed in music genres: people often perceive songs within the same genre as more similar, even though there can be significant variations. In contrast, between-category separation can be seen in the perception of animals: we might think that a cat and a dog are more different from each other than they actually are because they belong to distinct categories, even though their biological similarities are notable. Another example of within-category compression would be colour perception, and another example of between-category separation would be car types (i.e., pickup trucks vs. sports cars). Hope this helps!
DeleteIn this reading, Harnad’s breakdown of categorical perception is clear and thought provoking. I am totally onboard when it comes to the empirical logic surrounding both the innateness and learned components of categorical categories, as there seems to be a conceivable avenue in which they can be studied relatively objectively (through neuropsychology, etc.). Where I am having a hard time following is how there is utility of continuous categories in relation to what we are trying to figure out in cognitive science (namely, how it is that we are doing what we are doing). As Prof. Harnad has said before, categories like ‘justice’ or ‘beauty’ are on a sliding scale, and their boundaries lack binary distinctiveness. I would argue that not one given individual would put the same things (in absolute) in these categories as another, while this is much more conceivable when it comes to categorical categories where distinguishment is predicated on more sensory-motor, less metacognitive processes.
ReplyDeleteSo, I am curious, if it is perhaps the case that we cannot distinguish parameters for some of these continuous categories across individuals, does it have utility in the scope of what cognitive science is trying to achieve?
I was thinking, in terms of forming abstract categories such as goodness: would the categories of positive and negative feelings be an innate quality that accompanies learned categorization? For example, in a still-face experiment, a mother smiles and interacts with her infant, and the infant reacts in a playful, curious manner during this interaction. When the mother stops smiling and appears emotionless, the infant begins to struggle and cry, and experiences distress. Without having had much exposure to the world yet, the infant seems to categorize the behaviors of others into categories of positive-good and negative-bad. In this case, the act of smiling is demonstrated as an innate categorization of good. So the within-category differences here would be the particular emotions, and the between-category distinction would be good versus bad. The infant is also innately capable of producing behaviors such as smiling and giggling, which is accompanied by learned CP based on whether the mother is responsive to their needs over time (also an evolutionary quality).
ReplyDeleteHi, I like the idea that abstract categories like 'good' and 'bad' stem from positive and negative feelings. I know Hume has already posited the idea that moral intuition is mainly based on feelings, and I think there is a good deal of evidence for that. So, the 'good' and 'bad' categories might be learned categories that developed from the innate positive and negative categories of feelings.
DeleteBut I don't think the experiment you mentioned shows that the infant already has categories for 'good' and 'bad.' It's clear that the child recognizes the emotion conveyed by the mother and reacts to it appropriately, but that doesn't require having conceptual categories for positive and negative. The infant may be able to distinguish between the emotions without having them categorized as good or bad.
I do think that children gain a Manichean worldview (seeing things in terms of 'good' and 'bad') very early as an easy and often effective way to understand the world. This 'good' and 'bad' perspective may persist in many domains until we develop a more comprehensive and nuanced understanding of it.
Categorical perception (CP) can result from inherent traits or be induced by learning. It's intriguing why distinctions within categories appear minimized, while differences between categories become more pronounced. The language-induced CP section reminded me of a theory that suggests linguistic influence: languages with limited color terms may cause people to perceive similar colors as a group. But what about innate CP, driven by sensory detectors? It raises questions about how newborns perceive differences. If we could access their brain responses, would they find greens and blues more alike within categories and distinguish them more distinctly between green and blue categories? This aspect of human perception would give insights into innate categories.
ReplyDelete'Categorical Perception' by Prof. Harnad delves into the diverse theories surrounding categorical perception and discusses whether the categories involved are innate or variable. It touches on the Whorf hypothesis and language-induced CP, though this is relevant only to certain category types, prompting questions about how CP filters our reality. While we might ascertain the validity of Whorf's hypothesis through reverse engineering, for humans it is largely cultural and qualitative. The origin of linguistic categories is intrinsically societal, which seems to circle back to the intricate issue of consciousness and a kind of phenomenological reasoning dismissed by cognitive science. Maybe through guided learning with genuine limitations, we can approach a better understanding of how humans interact with cultural and linguistic factors.
ReplyDeleteThis article clarified many aspects of categorical perception for me. In the last paper, I was confused at how the rainbow was noted as an example of CP, because although we can distinguish the ROYGBIV, there is also a color spectrum, so the explanation of a mix of categorical and continuous perception really settled my confusion.
ReplyDeleteThe effects of language-induced CP are quite interesting. I had taken for granted the amount of information we are able to convey and understand through language, but breaking the abilities down like this unveils the intricacy of how we use language to communicate, and without our unique cognitive abilities (of course, rooted in sensorimotor grounding) it would not be possible. When direct sensorimotor observation isn't an option, we are able to learn a thing's features via language, and then, through abstraction, form an internal understanding of what it is, which is quite a remarkable capability.
“Categories are important because they determine how we see and act upon the world.” This sentence really stirs some feelings in me, as I am thinking: if we never interact with certain objects in our whole life, we will never develop our capacity to do things with them. For example, we won't be able to produce certain phonemes if we don't speak certain languages, even though we all possess the capability of making all phonemes at birth. So this makes me wonder: are there other capabilities we are not yet utilizing fully? What is humans' potential in cognition?
ReplyDelete“Perhaps, then, it is an innate effect, evolved to "prepare" us to learn to speak.” This quote instantly reminded me of my skywriting on 6a, where I discussed how, while I once thought some categories were innate, I now think it is more likely that the tools for creating categories are innate rather than the categories themselves. This suggests that our brains have the ability to do many things innately, but we need to be taught and exposed to society for the action to actually occur. It also connects to how babies often have certain time frames and stages for learning and producing speech, since other factors need to be triggered, slowly but surely, to unlock the use of this preparedness. The portion in "Acquired Distinctiveness" about chinchillas confused me a bit. From my understanding, he is explaining that speech CP is not special on its own, but is special because it is triggered by other factors. So if chinchillas had these other factors, would they be able to speak?
ReplyDeleteIn the section "Language-induced CP", I found interesting the specific point that, after a set of category names becomes grounded, they can go on to produce 'children': take the example of mother: woman = female & human, mother = female & parent. I find this system of inheritance intuitive in relation to cognition; it is easy to see how many of our constructed categories could be made this way. Thinking back to Section 30 of the last reading (categories guide how we interact with various things, shaping cognition), it seems that our learning about and interactions with various things also determine the shaping of our categories.
ReplyDeleteI read the 2019 paper and found that it uses neural-network modelling to characterize learning. With the successful learning of new categories, our perception is warped: within-category differences are compressed and between-category differences heightened. In computational terms, and from the perspective of unsupervised learning, whenever a new category is learned the distances between this category and the other clusters are restructured, and if the overall space remains constant, the addition of categories effectively translates into increased accuracy.
In essence, learning is a step-by-step classification process that involves continuously reorganizing an increasingly fine-grained representational space through unsupervised learning.
However, the brain's storage mechanisms raise a concern. Unlike computers, which retain each point (feature) during learning, humans tend to forget learned features over time, especially complex features like the ones used in the paper's experiment. If forgetting is inevitable, the challenge becomes how to ensure that newly learned categories are accurately assigned, since older categories may have lost the "remembered" features they were based on, making those categories unreliable.
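To make the distinction quoted earlier from the 2019 paper concrete ("learning some categories does not generate category-specific CP, but merely an overall increase in all interstimulus distances"), here is a toy numeric illustration (not the paper's actual analysis code; the points and labels are invented): an across-the-board expansion of all distances leaves the between/within ratio unchanged, whereas category-specific warping changes it.

```python
# Toy sketch: uniform expansion of all interstimulus distances vs. category-specific warping.
import numpy as np

def mean_within_between(points, labels):
    """Mean pairwise distance within categories and between categories."""
    points, labels = np.asarray(points), np.asarray(labels)
    within, between = [], []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = np.linalg.norm(points[i] - points[j])
            (within if labels[i] == labels[j] else between).append(d)
    return np.mean(within), np.mean(between)

labels = [0, 0, 0, 1, 1, 1]
baseline = np.array([[0.1], [0.2], [0.3], [0.6], [0.7], [0.8]])

w0, b0 = mean_within_between(baseline, labels)
w1, b1 = mean_within_between(baseline * 1.5, labels)            # everything expands equally
w2, b2 = mean_within_between(np.where(baseline < 0.5,           # category-specific warping
                                      baseline * 0.5, baseline * 0.5 + 0.6), labels)

for name, (w, b) in [("baseline", (w0, b0)),
                     ("uniform expansion (no CP)", (w1, b1)),
                     ("warped toward the categories (CP)", (w2, b2))]:
    print(f"{name:34s} within = {w:.2f}  between = {b:.2f}  ratio = {b / w:.2f}")
```

The middle case increases every distance but keeps the between/within ratio fixed, which is the "no CP" pattern; only the third case, where distances change differently depending on category membership, shows the CP-like warping.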
I was very intrigued by the Whorf Hypothesis. As a kid, I was curious about how others saw the world, especially in terms of colors. There was, and is, no proof that the color I see as red and the color that, say, my mom sees as red are the same, but we both categorize it as red because whatever color theatre curtains are is "red." A similar categorization, to me, is tonality in speech, which is really important for conveying the meaning of a word in some languages, like Mandarin, and very important for conveying emotions in other languages, like Turkish. Differentiating these tonal changes is related, I think. If someone's voice is bright and loud we'll say that they are excited, but if their voice is serious and loud we'll say that they are angry. A recent experiment that I participated in asked me to judge how a person was feeling based on their facial expression. I was given a quadrant with pleasant/unpleasant on the x-axis and expressive/unexpressive on the y-axis, and I had to place a dot on the quadrant based on the given pictures of faces. I was basically categorizing the facial expressions on a spectrum. I have not been given the debriefing, but I believe the experiment was testing something about categorical perception. I wonder, if this experiment were done with sounds instead, and given to people grouped by their native languages, how it would help us understand whether the categorization of tonality is innate or learned.
ReplyDeleteThroughout this week’s reading, I was fascinated by the Whorf Hypothesis in its efforts to convince the readers that naming and categorization alone can warp our perception of the world. This got me thinking more about how the different languages that I speak (English, French, and Korean) not only differ in their abilities to convey a certain message but also make me speak and even behave in ways that conform to the cultural values of each of the places where the languages are spoken in. This might be a bit confusing to understand but I find often that when I am speaking Korean, regardless of who I am speaking to, I am more reserved and careful with the choice of words that I use. Although this can be partially explained by Korean culture’s need to speak formally, even with close friends I still find myself using words that are more vague and abstract language as opposed to words that are more direct and “confrontational”. On the other hand, when I am speaking English or French (mostly English), I find myself using words that are much more to the point and less “beating around the bush”. Even if we were to talk about the same things in both Korean and English, I feel that both of us would have very different perceptions of how each conversation went. Furthermore, regarding words that do not exist directly in each others’ languages, it becomes hard to even convey the proper content of the message itself. For example, in Korean there exists a word to describe certain types of food that make you feel a certain way. The closest English translation is for the food to be “greasy”, “oily”, or “heavy”. However, none of these words to me convey that EXACT feeling of what it is in Korean. I have tried numerous times to describe it to my friends and such but to no avail. I think this example provides a potential support to the Whorf Hypothesis and its rejuvenation.
ReplyDeleteWhile I was reading the paper on Categorical Perception, this part came very interesting to me. It states, “Categories are important because they determine how we see and act upon the world. As William James noted, we do not see a continuum of "blooming, buzzing confusion" but an orderly world of discrete objects. Some of these categories are "prepared" in advance by evolution: The frog's brain is born already able to detect "flies"; it needs only normal exposure rather than any special learning in order to recognize and catch them. Humans have such innate category-detectors too: The human face itself is probably an example”. This got me thinking about Prosopagnosic patients. Since prosopagnosic patients have difficulty recognizing faces, could that mean that they could have problems associated with their innate categorization as well? I researched some papers regarding this question. In the paper “Face perception and within-category discrimination in prosopagnosia” written by Martha J. Farah, Karen L. Levinson, Karen L. Klein, they state that prosopagnosic patients have difficulties in identifying living creatures’ faces, but do not have much difficulty in the recognition of common objects. (https://www.sciencedirect.com/science/article/pii/002839329500002K?via%3Dihub)
ReplyDeleteIn the "Language-induced CP" part of the reading, it discusses how there are two types of categorical perception (CP): innate CP and learned CP. It mentions a question, which states, "Can categories, and their accompanying CP, be acquired through language alone?". Moreover, it mentions, "How many of us have seen a unicorn in real life? We have seen pictures of them, but what had those who first drew those pictures seen? And what about categories I cannot draw or see (or taste or touch): What about the most abstract categories, such as goodness and truth? Some of our categories must originate from another source than direct sensorimotor experience". This got me thinking about whether we can really categorize with language alone. Personally, I do think that as long as we are born with the necessary innate CP, we can categorize perceptually through language alone. In the case of children this may be challenging, since they do not yet have the common categories learned, which would limit them from doing CP through language alone (with no previous visual or other sensory experience, it's challenging to perform CP through language alone). With adults, however, since we already have a pre-learned set of categories built through experience, I believe we do not need to experience something directly in order to have CP for it.
ReplyDeleteThis text reminded me of embodied cognition theories. When it comes to the CP effect between abstract categories, it is possible for this effect to be driven by the physical sensations that are linked to the features of abstract concepts. Take the example of a unicorn: it is understood as a sort of magical horse with a horn in the middle of its head. Therefore, to understand this category, one would attribute to it a significant number of physical sensations that are also associated with horses. If we assume that there exists a set of all the physical sensations that can be perceived by our sensorimotor apparatus, and that any abstract concept, at its core, is built upon varying arrangements of members from this set, then it is not surprising to have CP for abstract categories. Obviously, the more experience one has with a category, the more refined the boundaries of that concept become, and the clearer the perception of the category (i.e., the higher the resolution needed for the borders between concepts to become blurry).
I wanted to bounce off the professor's mention of critical periods for language acquisition. Critical periods are times in which the developing brain is most sensitive to certain stimuli, when learning about a certain kind of thing is especially 'easy' for a growing organism. In this sense, the innate/cultural distinction becomes a little more complex: while the brain's capacity for heightened plasticity is innate, since the timelines of critical periods are generalized across individuals of the same species, what the organism absorbs from its environment varies. To address the last part of the paper: say there were a critical period for learning what 'goodness' is (reminiscent of Erikson's stages of development). The child would then give more salience to everything 'goodness'-related in their environment and, using adult guidance, narrow down what 'goodness' has been made to mean (compressing what counts as 'goodness' into its category and separating what goodness isn't from that category). Although an innate critical period for something this abstract is hard to imagine, given that our moral compass usually has little to do with survival and reproduction, it is an interesting thought experiment that has, for me, shed new light on the concept (or the category) of categorization.
Our discussion of categorization as doing the right thing with the right kind of thing makes me think about natural kinds. Natural kinds are important to science: Plato described defining them as carving nature at its joints, and natural kinds help us categorize what there is in the world. So defining natural kinds helps us do the right thing with the right kind. But aside from how realists normally think of them, as microstructural markers of a category and its borders, we can also think of them in a simpler way, as just the subjective difference between pa and ba.
This reading on categorical perception was captivating. The concept suggests that we see things as more similar when they belong to the same category. For example, consider the colors blue and green. If you have two shades of green, g1 and g2, that are as far apart on the color spectrum as a green and a blue, say g2 and b1, people will still rate the two greens as more similar to each other (perceived ||g2 - g1|| < ||b1 - g2||, even though the physical distances are equal). I find this quite remarkable.
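A toy numerical illustration of that warping (the boundary wavelength and the compression/separation factors below are made up purely for illustration, not measured values):

```python
# Toy model of CP "warping": within-category distances are compressed,
# between-category distances are expanded. All numbers are illustrative.

GREEN_BLUE_BOUNDARY = 500      # hypothetical boundary on a wavelength axis (nm)
COMPRESS, SEPARATE = 0.5, 1.5  # hypothetical warping factors

def category(wavelength_nm):
    return "green" if wavelength_nm > GREEN_BLUE_BOUNDARY else "blue"

def perceived_distance(w1, w2):
    physical = abs(w1 - w2)
    factor = COMPRESS if category(w1) == category(w2) else SEPARATE
    return physical * factor

g1, g2, b1 = 540, 520, 500  # g1-g2 and g2-b1 are both 20 nm apart physically
print(perceived_distance(g1, g2))  # 10.0 -> the two greens seem more alike
print(perceived_distance(g2, b1))  # 30.0 -> the green and the blue seem less alike
```

Equal physical separations come out unequal once a category boundary falls between the two items; that is the compression/separation signature of CP.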
One important question raised is whether categorical perception is learned or evolved. In my opinion, it is a mix of both, and wanting it to be only one or the other seems misguided. We have evolved with three types of color-sensitive cells in our eyes, the cones, which likely provided an evolutionary advantage in distinguishing between different shades. For example, if we are naturally repulsed by the texture of rotten food, then having a built-in way to perceive that texture as distinct would give us an evolutionary advantage. However, when the relevant distinctions are specific to our particular environment, learning to perceive those categories as more different becomes equally important.
Ok, having felt a bit like the thread had been lost regarding the pursuit of CogSci, I had to take a step back and examine why we are doing a deep dive on categorization. I can very much appreciate the importance of categorization, as well as CP, for the project that is cognition. The feeling-thinking-doing machines that we are seemingly use categories as a main tool for, well… life. I am trying to come up with something to say without being trite. I guess I find myself experiencing what I imagine many a cognitive scientist has experienced, namely a feeling of "that's it?" about where we are in our project of reverse-engineering a cognizing machine. Everything we are trying to explain is behavior that is fascinating and ultimately familiar (it is very much our lives), and so it can feel circular once a comprehensive explanation of a phenomenon (categorization) has been reached. I am excited by the prospect of advances in science and technology pushing the envelope of cognitive science, particularly things like neural net models giving (in some sense) satisfying, concrete mechanisms for how we do what we do with regard to categorization. A lot of the article focused on observed phenomena (because what else would we focus on), for example the refutation of the strong Sapir-Whorf hypothesis while still giving credence to the weak one, which gives us hints as to how we work: how we do what we do, like perhaps having some inbuilt Darwinian CP (innate CP) that's "hardcoded", while leaving wiggle room for learned CP in order to maximize (not in a strict sense but in a Darwinian sense) our lives.
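On the point about neural net models providing concrete mechanisms: here is a minimal sketch (my own toy example, not a model from the readings) of a tiny network trained only to categorize points on a one-dimensional continuum. After training, one can check whether its hidden representations show the CP signature: pairs on the same side of the boundary pulled closer together, pairs straddling the boundary pushed apart.

```python
# Toy sketch: does a small neural net, trained only to categorize,
# develop "warped" hidden representations (within-category compression,
# between-category separation)? All details here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40).reshape(-1, 1)   # a one-dimensional continuum
y = (x > 0.5).astype(float)                    # arbitrary category boundary at 0.5

W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)   # hidden layer (8 units)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def hidden(inp):
    return sigmoid(inp @ W1 + b1)

def hidden_distance(a, b):
    """Euclidean distance between the hidden representations of two stimuli."""
    return float(np.linalg.norm(hidden(np.array([[a]])) - hidden(np.array([[b]]))))

# Two pairs, equally spaced on the continuum: one within a category,
# one straddling the boundary.
pairs = [(0.35, 0.45), (0.45, 0.55)]
before = [hidden_distance(a, b) for a, b in pairs]

lr, n = 0.5, len(x)
for _ in range(20000):                          # plain full-batch gradient descent
    h = hidden(x)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # gradient of MSE loss at the output
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / n; b2 -= lr * d_out.sum(0) / n
    W1 -= lr * (x.T @ d_h) / n;   b1 -= lr * d_h.sum(0) / n

after = [hidden_distance(a, b) for a, b in pairs]
print("within / between hidden distance before training:", before)
print("within / between hidden distance after training: ", after)
# Typically the between-category pair ends up much farther apart in hidden
# space than the equally spaced within-category pair: the CP signature.
```

This is only a caricature of the results reported for category-learning nets, but it shows concretely what a "mechanism for warping perceived similarity" could even mean.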
I am aware this is quite all over the place, so please let me know if you would like these to be more polished and focused; I am merely writing my thoughts right after reading the article and all of the skies (my fault for continually doing them so late and feeling I have little of direct substance to add to the conversation, so I resort instead to head-in-the-clouds thoughts that bear minimal resemblance to the article at hand).
The paper "The Great Eskimo Vocabulary Hoax" debunks the myth that Eskimos have an extraordinary number of words for snow. The anthropologist Laura Martin presented a paper in 1982 exposing the falsehood of this claim, yet it continues to be perpetuated in popular culture. Pullum gives examples of how the myth has been repeated in textbooks, lectures, and media outlets, and emphasizes the importance of careful research and of evaluating one's assumptions in academic work. The Eskimo vocabulary hoax is a cautionary tale about the dangers of relying on stereotypes and uncritical sources. Overall, the paper provides a detailed and informative analysis of the hoax and its implications for linguistic research and cultural understanding.