Hi everyone, I was reviewing the keywords and my notes and I'm a bit confused about what exactly underdetermination is and how it is related to the Hard Problem. What I have at the moment is that even if a hypothesis is extremely solid, we can't be certain, as we can only be truly certain about our own feelings and mathematical proofs (as Descartes said). Thanks for your help!
I think you're right, Jess. And I believe underdetermination was also brought up when discussing UG? We have this data, which is the way we talk. A lot of theories can explain the data, but we cannot really know which one is right; this is a challenge Chomsky was facing. I am not sure if UG was brought up to say that it is just like any theory, no more likely to be right due to underdetermination, or whether it "solved" the problem.
Jess is right. A scientific explanation is like a list of a category’s features. It is not necessarily correct, or complete; just probable.
And there may be different lists of features each of which can explain what is known so far.
This underdetermination is the gap between probability and certainty, both for everyday category learning and science. Both are approximate and can only become more probable, with more time and evidence.
Yes, UG is a scientific theory, so it too is underdetermined.
Not OG, though, because we make up the rules of OG, just as we make up the rules of chess.
I’m not entirely confident in my understanding of approximation in relation to language. I understand why approximation is relevant to categorization, as the invariant features we use to distinguish members of a category are approximate, and may be subject to change if we are exposed to a wider sample of members versus non-members. For language, is approximation related to “Pictures” vs. “1000 words”, where a description of a thing cannot capture what that thing is entirely? Are there other examples of approximation in language that we talked about in class?
Jess, I also understood approximation in relation to Pictures vs 1000 words. For computation, I believe it was related to the continuous vs discrete aspects, and the idea that with enough steps you could approximate continuous cognitive processes or something like that. I feel it is related to that but I am not sure...
Hi Emma, that is also my understanding for approximation in computation, and relates to the difference between analog versus digital systems. An analog system is continuous, which can also be described as a dynamic system. An analog system can be simulated by a digital computation system, but digital computation is discrete, so this simulation will always be approximate.
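To make the analog-vs-digital point concrete, here is a minimal sketch (my own toy example, not anything from the readings): a continuous (analog) law of change, dx/dt = -x, simulated by chopping time into discrete steps. The simulation gets closer as the steps get finer, but it is always an approximation.

```python
import math

def euler_decay(x0, rate, t_end, steps):
    """Digitally simulate the continuous (analog) law dx/dt = -rate * x
    by chopping time into `steps` discrete Euler steps."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x += dt * (-rate * x)  # one discrete update of the continuous law
    return x

exact = math.exp(-1.0)                     # true analog value at t = 1
coarse = euler_decay(1.0, 1.0, 1.0, 10)    # 10 steps: rough approximation
fine = euler_decay(1.0, 1.0, 1.0, 10_000)  # 10,000 steps: much closer

# The discrete simulation approaches, but never exactly equals, the
# continuous value; it can only be made as close as we like.
assert abs(fine - exact) < abs(coarse - exact)
```

The same holds in principle for any digital simulation of a dynamical system: more steps (more parameters) tighten the approximation without ever closing it completely.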
To link what you guys mentioned to sensorimotor learning: although language is quick and easy (like using keys to open the car, it's often an unfelt act), as you both mentioned, "a picture is worth a thousand words". Even if we could always modify or improve a verbal description, sensorimotor learning is what grounds feature-names, since language only gives us approximations of categories.
“Language” is a lot of things: Syntactic rules (OG and UG), vocabulary (Category names and features), phonology.
OG is prescriptive; we make it up as we go along. Linguists have to learn the features of UG (though children can’t, so linguists infer that they must be inborn in their brains). But UG is an empirical scientific theory (generative linguistics), so it remains underdetermined, and UG’s features are still only partly known.
Whether a mushroom is “edible” (for humans) is empirical, so both our learned sensorimotor feature-detectors (neural nets) and our verbal feature lists are underdetermined and approximate. The feature list can always be extended, if need be (as with a picture — or an object or the world — and its verbal description, in 1000 words or 10,000).
Important: ditto for a computational model. There too, more features can be added to tighten the approximation.
OG is partly true by definition, because we invented it. UG, because of the POS, would be hopelessly underdetermined if it weren't for the fact that adults, including linguists, have an inborn ear for it.
This association with language is supplemental to my understanding of how “Pictures” vs. “1000 words” relate to underdetermination and approximation. I do agree that a picture might say more if the 1000 words don’t even contain the key point or expression needed to communicate effectively. However, I also remember our discussion of the transition from "show" to "tell", where the latter allows more specific descriptions and is more adaptive. Are the two concepts analogous, and in what context?
Hi everyone! After reviewing the keywords document I am a bit confused about the topics “Darwinian survival machines” and “EEA (environment of evolutionary adaptiveness).” I have gone through my notes and do not see mentions of these—does anyone have any information on when we talked about these keywords, and how they relate to other class material? Thank you for your help!
Hi Shona! For EEA, you may find its definition in our 7a reading (starting on p. 362). However, this concept barely relates to the HOW part of EP; it just provides a little explanation of WHY we have the capacities for DOING.
I also have no clue about which week we talked about Darwinian survival machines (either week 7 or week 8). From my perspective, it seems like a way to explain what an individual is in the senses of both EvoPsych (Darwinian survival; reproduction-directed) and CogSci (machines; "cause-effect systems"). Hope this leads to less confusion about the concept. Also, please let me know if I've misunderstood or explained anything wrong.
EEA is the ancestral environment, as compared to the current one (as in the human child's fondness for sugar then and now).
Evolution gives adaptive (survival, reproduction) cause-effect explanations of organisms' anatomy, physiology, and DOing capacities, but not of the fact that we feel. For evolutionary explanation we may as well have been zombies, just doing what needs to be done to survive and reproduce (which includes the capacity to learn, to categorize, to communicate, to "mind-read" and to speak and understand -- but zombily). Its study has so far not revealed a solution to the HP.
The concept of “language” and “implicit VS explicit cognition” have an interesting connection in the example of playing instruments. Implicit cognition happens when someone can do a task without being aware of how they do it exactly. This relates to the “easy problem” of cognition, which asks how we can do the things we do. When pianists learn a melody, they first rely on “language” to read the notes on the sheet, which then helps them guide the hand on the keyboard. At this stage, they rely on explicit cognition in multiple ways: by processing the notes through language, and by deliberately positioning their hands on the keyboard. With practice, the pianist learns to play at a faster pace, meaning there is less time to deliberately process which notes come next and how to place the hand to play them. It becomes increasingly difficult to rely on explicit cognition, so the pianist relies on implicit cognition, primarily muscle memory. Playing using muscle memory is relying on what “feels” right — which hand position “feels” like it will play the melody that I’m supposed to. This relates to the “hard problem”, which asks how we feel the things we feel, because it’s hard to tell exactly how and why this hand position feels like the right one. At this stage of learning, it can be very difficult for the pianist to play the same melody but slowly, since it forces them to rely on explicit cognition again, which decays without regular practice.
PS: This helps explain why some musicians blank on stage and forget the rest of the melody they’ve spent months practicing. Personally, the moment I ask myself “what notes come next?”, I feel an inner “switch” and my muscle memory disappears. I’m forced to rely on explicit cognition and name the next notes in my mind, until I somehow “switch back” to implicit cognition again.
That’s part of the reason why music professors will advise students to regularly practice the same melody both slowly and quickly, so that in case of a “switch” to explicit cognition, you can finish the performance.
Anaïs, very good account, but the explicit/implicit distinction does not align exactly with the felt/unfelt distinction, although it is often correlated with it. And it certainly does not solve the HP (of explaining how or why we FEEL at all rather than just DO).
We didn’t talk about selective attention much this year, but consider that many activities are going on in the brain at once, and the focus of the activity can change (e.g., from what’s on my left to what’s on my right in my visual field) just as my movement can. There is no explanation of why a selective shift in attention (or intention — or movement) has to be FELT, rather than just DONE, zombily.
This is also why it is a misunderstanding of category feature learning and grounding to think that you need to FEEL the “taste” of an apple to learn to ground the category. Unfelt chemical detection would do the same job. “Neural nets” need not feel; they need only detect features of their sensory input. Robots too.
(It is not even clear why a Chinese T2-passer needs to FEEL what it’s like to understand Chinese: Why isn’t just talking the talk and walking the walk enough? Yet Searle correctly reminds us that it does feel like something to understand, and that that is completely missing when he executes the T2-passing program. Therefore cognition is not (just) computation.)
I think we decided it would be on the weekend after the last lecture, with a long time to respond (but enough time for me to hand in the marks in time).
Hi, I don't know if I missed it -- I know that there hasn't been an email sent, but are there any updates about the format/instructions for the final essay and its deadline? Also, should we send our skywritings now?
From my understanding, it will be released on Monday with all the instructions and due on the 21st. We will be given a graph of important concepts and will have to describe the concepts as well as their connections to one another.
In trying to wrap this semester’s concepts together, I find myself returning again and again to the idea of certainty, and the question of when and why we can be certain that something is what we think it is. Certainty of feeling, as Descartes describes it, asserts that our observations are not trustworthy, and that we can never, in fact, be certain. Bringing that around to our discussion last week, I wonder if there is any way, with Descartes’ certainty, to have a confident understanding of cognition, or is our only certainty that we can never know? Namely, in the case of cognizing individuals (by which I mean those who are capable of acting differently and appropriately toward different categories of stimulus, as per ‘To Cognize is to Categorize’), the ability to be certain that another is cognizing eludes us, no matter how much thought we throw at the problem (OMP). One thing I have grown to appreciate much more over this semester is how semantics brings us even further from any sort of certainty: being careless or imprecise with language when talking about cognition leads to the rise of weasel words, and makes an already large challenge seem impossible. Developing any sense of certainty on the matter of cognition mandates that we be precise in our language, that we DO get bogged down in the semantics, and that we do what we can to identify our own limits to introspection, and through that an appreciation of what we can and cannot know, to avoid overstating claims and asserting certainty where it is not justified.
Hi Madeleine, I agree that this semester has emphasized the importance of using precise language and avoiding what we refer to as "weasel words". Moreover, distinguishing why these words are "weasel words" has shed light on ways people often avoid answering the real questions of cognitive science. For example, a word like "representation" is "weaselly" because it is homuncular, meaning that it implies some little man in our head is orchestrating our thoughts/feelings. This leads to useless explanations because we then have to explain the causal mechanisms in this little man's head which give rise to his functions. This particular example of the word "representation" was interesting to me, because I so often encounter the term "mental representations" in neuroscience and psychology, and now see ways in which these scientific writings may avoid the "black boxes" of cognition.
Distinguish between what is probably true and what is certainly true. All scientific truths are only probable, not just those of cognitive science. And they are all underdetermined. The OMP (determining whether and what others feel) adds another layer of uncertainty to underdetermination, but it still remains a matter of probability.
I think it's not so much a matter of "precise" language as simple, kid-sibly language...
Hi Adrienne, I believe vanishing intersections was mentioned in the 'To Cognize is to Categorize: Cognition is Categorization' reading. The "vanishing intersections" problem is a challenge to the idea that categories can be learned or evolved based on shared sensory features. The problem arises because when we look at the sensory shadows of different things, we often find that there is no single feature or set of features that is shared by all members of a category. This makes it difficult to identify what it is that defines a category, and suggests that some categories may be innate or hardwired in the brain rather than learned through experience.
Uncomplemented categories ("Layleks") refers to a category where one is only exposed to members of the group, but is never exposed to non-members. Therefore, it is impossible to determine what features are important to distinguish members of this category from others, because one is never exposed to feedback that would illustrate the unifying features of the members of the category and the features that differentiate members from non-members. For example, UG is an uncomplemented category -- children are never exposed to what is not UG (they lack negative evidence).
Lélek means "soul" in Hungarian, which is uncomplemented, because soul is just a weasel-word for feeling, and we don't know what isn't a feeling. Our whole lives we have done nothing but feel and can't imagine what a non-feeling would be.
Yes, the Blind Watchmaker is a refutation of the intelligent design argument. The idea behind the original argument is that if you find an object whose form seems unusually suited to its function in a way that couldn't have happened by chance - like a watch - then you would logically infer that someone - like a watchmaker - intentionally designed that object. Applied to the adaptive nature of biological organisms, this argument concludes that these organisms must have been designed by an intelligent agent, who was consciously working toward a goal. Evolution is a blind watchmaker because it's a mechanism by which form can be developed to fit function without intelligent design being involved. Yes, the evolutionary process is working toward adapting the form of an organism to its function (survival and reproduction), but it's not doing so in a conscious or systematic way. Instead, random mutations in the genetic code are like "blind" jabs at improving the organism, because they may or may not improve its chances of survival and reproduction. Progress is made because the random mutations which are advantageous to survival naturally stick around, and over time they build up.
Evolution is blind variation in genetically coded traits and selective retention: but the selective mechanism is also the blind one of whether the genetic trait helps or hinders survival and reproduction. If it helps, it's still there in the progeny and gene-pool in the next generation. If not (because it didn't survive or reproduce), it's gone.
Should remind you of trial/error/feedback learning...
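A minimal sketch of "blind variation and selective retention" (a hypothetical toy of my own, not a model of any real genome): mutations are random, blind jabs at the genetic code, and the only "selection" is whether the resulting trait survives into the next generation.

```python
import random

random.seed(0)  # deterministic run for the sketch

GENOME_LEN = 20

def fitness(genome):
    # Stand-in for the survival/reproduction value of the coded traits.
    return sum(genome)

genome = [0] * GENOME_LEN
for generation in range(500):
    # Blind variation: mutate one randomly chosen locus.
    mutant = genome[:]
    locus = random.randrange(GENOME_LEN)
    mutant[locus] = 1 - mutant[locus]
    # Blind selective retention: the variant persists in the gene-pool
    # only if it does not hurt survival/reproduction.
    if fitness(mutant) >= fitness(genome):
        genome = mutant

# No foresight anywhere, yet fitness accumulates across generations.
assert fitness(genome) > 0
```

Note the parallel with trial/error/feedback learning: the "trial" is a random mutation and the "feedback" is differential survival, with no intelligent designer in the loop.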
Hi guys! In class we briefly talked about the Strong Church/Turing Thesis and how it’s similar to language. We talked about how saying that a cat is on the mat is not the same as a cat actually BEING on the mat. This part I understand, but I was wondering if someone could clarify similarities/differences between the Strong Church/Turing Thesis and language because I am not sure I understood the explanation provided in class.
Hi Lili! I’m also having some trouble understanding the differences between the two, but I think I can say a little bit about how they’re similar. The professor presented language as an alternative to computation in terms of modeling/simulating things, like how you wrote “the cat is on the mat” in a natural language. In my understanding language is just a different option for fulfilling the Strong Church-Turing Thesis, since we can use language to simulate anything computations can.
As an aside - does the mental image I get of a “cat on a mat” when I read the sentence count as the simulation?
Sorry if I answered your question with more questions! Hopefully other people will chime in.
You can simulate/model a cat on a mat computationally, or describe a cat on a mat verbally. Neither of these IS a cat on a mat. One encodes the features of cats on mats computationally, the other encodes the features verbally. Both do it approximately, not exhaustively, but the approximation can be made as close as we like, by adding more features (parameters).
Computability is like describability, and computations can be encoded verbally.
Hi professor, I'm still slightly confused about the difference between language and computation in this case. Are the two direct equivalents in this situation? I had previously thought that computation was a type/subset of language but at your talk yesterday you seemed to suggest otherwise. Thanks in advance!
Computation is a purely formal subset of every language. 2+2=4 is a sentence in English, French, Chinese, and even Hungarian. So is the algorithm for calculating the roots of quadratic equations. But maths does not use the meaning of the symbols (if there is one): just their shape.
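For instance, the quadratic-roots algorithm can be executed purely by shape-manipulation; nothing in the code "knows" what the symbols mean:

```python
import math

def quadratic_roots(a, b, c):
    """The familiar algorithm for the real roots of ax^2 + bx + c = 0.
    The symbols are manipulated purely by their form (shape), not by
    any meaning they may have."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                       # no real roots
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

# x^2 - 5x + 6 = (x - 2)(x - 3)
assert quadratic_roots(1, -5, 6) == (3.0, 2.0)
```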
Hi guys! I was wondering if someone could briefly explain how the kernel, core, and MinSet relate to one another? I know we went through it but I'm having some trouble understanding my notes. Thanks in advance!
A dictionary is a set of words in which every word is defined by words that are themselves in the dictionary.
The Kernel (a dictionary within the dictionary) is what's left when you remove every word that does not define any further words, but can be defined from the words that remain. So the Kernel can define inward as well as outward (all the rest of the dictionary).
The Core (which is also a dictionary) is the “biggest strongly connected subset” of the words in the Kernel. There is a definitional path to and from every word in the Core to and from every other word in the Core. The Core can define inward, but not outward.
The Minsets are the smallest number of words within the Kernel that can define all the rest of the words outward (all the rest of the words in the dictionary).
A dictionary has only one Kernel and one Core.
But there are many MinSets in the Kernel, all the same minimal size. All are able to define outward. But none of them is a dictionary: Why not?
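The "strongly connected" idea can be made concrete with a toy definition graph (a hypothetical four-word mini-dictionary, purely illustrative): two words belong to the same strongly connected subset iff each can reach the other along definitional links, and the Core is the biggest such subset.

```python
# A hypothetical four-word mini-dictionary: each word maps to the set
# of words used in its definition (edges point word -> its definers).
defs = {
    "good":  {"nice"},
    "nice":  {"good"},
    "big":   {"good", "nice"},
    "house": {"big", "nice"},
}

def reaches(graph, start, goal):
    """Is there a definitional path from `start` to `goal`?"""
    seen, stack = set(), [start]
    while stack:
        word = stack.pop()
        if word == goal:
            return True
        if word in seen:
            continue
        seen.add(word)
        stack.extend(graph.get(word, ()))
    return False

# Group the words into strongly connected subsets (mutual reachability),
# then take the biggest one: that is the Core of this toy dictionary.
remaining = set(defs)
components = []
while remaining:
    w = remaining.pop()
    comp = {w} | {v for v in remaining
                  if reaches(defs, w, v) and reaches(defs, v, w)}
    remaining -= comp
    components.append(comp)

core = max(components, key=len)
# "good" and "nice" define each other, so each reaches the other;
# "big" and "house" can reach them but cannot be reached back.
assert core == {"good", "nice"}
```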
I'm not sure if this is correct, but is the reason that no MinSet is a dictionary just that it would take us way too long to communicate anything when you could simply use derived words (as well) where the meaning of each word is a combination of a bunch of words in the MinSet, which is much (MUCH) more efficient? Therefore no natural human language dictionary would contain only MinSet words; every language uses these derived words as well. (I feel like this is at least somewhat wrong because wouldn't this make the Kernel and Core not dictionaries as well... what am I missing?)
Hi Jordan, I think it might be simpler than you're making it. From my understanding, a MinSet isn't a dictionary because it can only define all of the words outside of itself. If it could define the words within it, too, it would just be a Kernel. Since a dictionary is just a set of words in which every word is defined by words that are themselves in the dictionary, a MinSet couldn't be a dictionary because if it defined all of the words in the dictionary (ie. within itself), it wouldn't be a MinSet anymore, and would just be a Kernel.
Hi guys! After reviewing the terms list I’m still a bit confused about underdetermination. From what I’ve gathered, this is the problem where a hypothesis/theory can’t be sufficiently explained from the information available in the environment, or, you can’t use the amount of available evidence to support one view or another. I think this relates to Chomsky’s poverty of the stimulus argument, with the UG proposal as a way to respond to the problem of underdetermination of environmental evidence.
If I’m connecting the term to the wrong unit or I’ve defined it wrong please let me know!
All empirical hypotheses are underdetermined: they are merely approximate and probable, not exhaustive and certain, like a maths proof. But the approximation can always be tightened, both by adding more sensory feature-detectors, and by enhancing the computational simulation or the verbal description ("1000 to 10,000 words").
Yes, this is related to POS, but in the special case of UG, it is more radical: It's not that you haven't yet sampled enough positive and negative examples (members and non-members) to learn their features by trial and error: You have not sampled any negative examples at all (unless you are an adult MIT linguist with an "ear" that can detect UG errors, because UG is already innately encoded in your brain -- OR because of some sort of universal constraint on the nature of verbal thought).
When looking at the keywords, I realized I didn't know what the distinction was between mental states and felt states. My first instinct was to assume that not all mental states are felt states, but if thinking is felt, shouldn't all mental states be felt states? Are there mental states that are unfelt?
"mental" and "mind" are weasel-words. All "mental" states are felt states and all felt states are "mental." (To have a "mind" is to have the capacity to feel.)
Prof Harnad, in every one of my skywritings, I try painstakingly to avoid weasel words, a practice influenced by your teachings! However, I'm not sure if someone made this comment already, but I can't help but notice the presence of one such word in our course title... "Categorization, Communication, and CONSCIOUSNESS." Consciousness is one of your least favourite weasel words as it's just a fancy word for feeling. Although the name CATCOMCON is quite iconic and CATCOMFEEL does not have the same effect, the rationale behind its inclusion intrigues me!
Miriam, you are absolutely right! "Consciousness" is a weasel-word.
And it is ironic (and perhaps a little confusing) that it appears in the name of the course.
The course has evolved across the years, but my addiction to alliteration is not the only culprit.
I've thought about changing it, but although "feeling" is not a weasel-word, almost everyone (until they’ve taken this course) assumes it just means emotion. And although emotion is indeed a felt state, it's only one of many kinds of felt states (sensation, warmth, hunger, meaning, understanding, thinking) that, before taking this course, most people don’t realize are felt states too.
So I have left the C-word in the title, and then taught in the course why it's a weasel-word. Maybe I'll put it in scare-quotes -- ...and "Consciousness" next year.
I think that it's great that Consciousness is in the title of the course for the reason Prof. Harnad gave and also because it lures in people who think that Consciousness is something beyond and more mystical than simply feeling (such as myself a few months ago), which are the people who most need to take this course, to learn that that is not the case.
À la my classmates, I’m going to write out my understanding of some of the topics I feel a bit fuzzier on, to test my understanding. Broadly, approximation refers to the process of becoming more accurate in categorizing. We talked about approximation in two respects: First, computational approximation, which can be maximally improved until there is a certain and precise grounding of numbers, theorems, and axioms, where they have a single, stable meaning. In mathematics, approximations can be reduced to zero. Second, language approximation refers to the fact that the words we use necessarily range in scope, and depending on the purpose our language is carrying out, we may wish to be more or less precise in the level of detail we convey. E.g., I can call our school McGill, or a university, or an establishment, but if what I want to convey is an idea specific to McGill and not other universities or establishments, I need a less approximate term. That said, there is no promise that a word carries the same meaning from person to person, and the approximation in communication /between/ individuals creates variability and unclarity in categorization. An optimal language would be one in which all speakers share the same definitions for each word, as information would be communicated in the most efficient way under these circumstances. A key difference between mathematical and language approximation is that mathematical categories can be exact, without any room for approximation, which is why we can have mathematical certainty but not certainty in language. Let me know if I’m getting this wrong, or if anyone has anything to add! Thanks!
Don't mix up (1) computation in maths (theorem-proving, problem-solving), which can be exhaustive and exact, hence certain, with (2) computational modelling in empirical science (e.g., of gravity, or an ice-cube), which is approximate and a matter of probability on the available evidence so far.
For language: propositions describe features. They are finite in length. But you can always lengthen them with more features.
Don't mix up approximation/underdetermination with polysemy (which is that words can be ambiguous, can have multiple meanings, or can have a different grounding (MinSet) for you and for me).
Hi guys, while reviewing my notes I wrote down s/s (spiders/sex) many times when we did evolutionary psychology. However, I am having a hard time understanding what that really refers to now… Could anyone clear this up?
Hi, I understood it to represent the main evolutionary pressures, which are to avoid anything harmful and to reproduce. I understood it to be related to other concepts such as the Hard Problem in that s/s provides an explanation for many things such as behaviors or our anatomy, but when it comes to understanding where feelings come from, the s/s explanation does not really help. It helps in explaining nociception, but not the actual feeling of pain.
Computational models of the physical world (ice-cubes, rockets, gravity, T3 cognitive capacity) are approximate.
But computation within mathematics (applying an algorithm to calculate the roots of a quadratic equation, proving theorems on the basis of deduction from formal axioms), is exact, because it is just formal, not empirical, like scientific modelling.
“An apple is a round, red fruit” is approximate and revisable. It could be made more and more exact by adding 10,000 more features. It could even turn out to be wrong for certain future exceptions we discover (are quinces apples?).
“An even number is a number divisible by two” is a definition, just as “a bachelor is an unmarried man” is. (These are examples that have been done to death by philosophers.)
Think of approximation as the features distinguishing the members from the nonmembers of a category. Neural nets, and scientists, have to find the features, and they can always turn out to have been wrong, or not enough. But in maths (tautologies) they are just a formal agreement on how to use symbols.
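Here is a minimal sketch of that kind of supervised feature-finding (a toy perceptron of my own, not any specific model from the course): the learner guesses the category, gets corrective feedback, and adjusts its feature weights by trial and error. The "mushroom" features and the edibility rule are invented for illustration.

```python
# Toy "mushrooms": feature 1 = cap size, feature 2 = spot density
# (features and rule invented for illustration).
# Edible iff spot density is low -- a linearly separable rule.
data = [((0.9, 0.1), True), ((0.4, 0.2), True),
        ((0.8, 0.9), False), ((0.3, 0.8), False)]

def train(samples, epochs=100, lr=0.1):
    """Supervised trial-and-error: guess, get feedback, adjust weights."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), member in samples:
            guess = w[0] * x1 + w[1] * x2 + b > 0
            error = int(member) - int(guess)   # corrective feedback
            w[0] += lr * error * x1            # nudge the feature weights
            w[1] += lr * error * x2
            b += lr * error
    return w, b

w, b = train(data)
# The learned feature weights now sort members from non-members --
# but only for the sample seen so far: a new exception could still
# prove the learned features wrong or insufficient.
for (x1, x2), member in data:
    assert (w[0] * x1 + w[1] * x2 + b > 0) == member
```

The point of the sketch is the last comment: the weights are an approximation to the category, underdetermined by the members and non-members sampled so far.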
(I think language is "optimal" enough; it's knowledge that's wanting. There's always room for 10,000 more words.)
From my understanding the Whorf Hypothesis states that our different languages influence the way in which we experience the world. For example, a particular language may make the distinction between two shades of blue and have commonly known names for them, therefore speakers of this language will know this difference. In the list of words there is the “Whorf hypothesis (Strong vs Weak)”, I am not sure I understand the strong vs weak aspects of the hypothesis. Can someone explain this?
Hi Maria! The strong/weak Whorf hypothesis was discussed in Week 6b. From my understanding, I think that the weak Whorf hypothesis is closer to the example that you provided, where language influences how we experience and categorize the world. The strong Whorf hypothesis is that language completely shapes how something is perceived. For example, the strong Whorf hypothesis would say that we perceive colours as distinct, and categorically separate, in the rainbow because colours are named categorically.
Learned categorical perception, in the context of the Whorf Hypothesis, states that language learning influences the way individuals perceive and categorize stimuli. Learned categorical perception can be considered an example of the weak Whorf hypothesis, where language plays a role in shaping categories but doesn't impose rigid and innate boundaries.
Hello, apologies for all my questions… I hope that they can also help clarify things for others. While going over the list of words, I am unsure what the “Systems Reply” means in the context of this course. Is anyone familiar?
Hi Maria! The Systems Reply is linked to the symbol grounding problem. It was a reply to Searle's Chinese Room argument, saying that even though Searle/the individual in the room may not understand the characters he's manipulating, the "system" he is part of (the room, the rule ledger, the data banks of Chinese characters, etc) DOES understand. Searle's response is to allow the individual to incorporate all the above components, ex. by memorizing them. Even so, he still doesn't understand the Chinese symbols, and therefore neither could the system. If you want to look at the specific reading it's Searle's "Minds, Brains, and Programs" around page 5. Hope this helps!
That's it. But Searle has to not only memorize the Chinese T2-passing algorithm, but execute it while being T2-tested, yet able to tell us in English that "while I do that, I do not understand the symbols."
Hi everyone, I'm still struggling a little with the concept of Induction vs. Instruction, could someone clarify overall how this applies to this course in a kid-sib way :)
Hi Mal! I think that the concept of instruction vs induction is covered in week 8/9 (especially the Blondin-Massé et al paper). As I understand it, this concept applies to the course in that it refers to a distinction within categorization, where categories can be learned through induction (direct experience) and instruction (through language). I think that this distinction highlights the importance of language for how we are able to do what we are able to do, and how advantageous it was to evolve to use language the way we do. The Blondin-Massé et al paper demonstrates, through an artificial life simulation how categories are learned through induction vs instruction, and the advantages that being able to learn categories through instructions provides. The paper also highlights that there are some instances where learning through direct experience (induction) is more beneficial than learning by being instructed by another. Therefore, I think this concept of induction vs instruction relates to the course in that it is concerned with the symbol grounding problem (there must be some induction, before instruction can occur), the evolutionary adaptiveness of language, and the evolution of language (as moving from showing to telling).
Yes, induction is learning to categorize directly -- through sensorimotor trial-and-error supervised learning -- by detecting the category's distinguishing features. Instruction is learning the features indirectly, by being told. But the features' names must themselves already be learned, named, hence grounded categories.
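Here is a toy sketch (my own invented example, not from the readings) of induction as supervised trial-and-error feature learning: a minimal perceptron gets corrective feedback on labeled members vs. non-members and gradually settles on the distinguishing feature. The "mushroom" features and examples are entirely hypothetical.

```python
# Toy sketch of "induction": supervised trial-and-error feature learning.
# A perceptron sees labeled examples (member = 1 / non-member = 0) and
# adjusts feature weights from corrective feedback until it can categorize.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # corrective feedback ("supervision")
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical features: [has_gills, white_cap, grows_on_wood]
examples = [([1, 0, 0], 1), ([1, 1, 0], 1), ([0, 1, 1], 0), ([0, 0, 1], 0)]
w, b = train_perceptron(examples)

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print(classify([1, 0, 1]))  # categorizes a never-seen instance by its learned features
```

The learner is never told which feature matters; it converges on it from feedback alone, which is the sense in which induction is direct, error-corrected learning rather than being told the features.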
Based on what we discussed in class, I consulted ChatGPT on why the hard problem is "hard", as it has always seemed a mystery:
1. Subjective nature: Consciousness is inherently subjective, making it difficult to objectively analyze or measure. Unlike observable behaviors or neural activity, subjective experiences cannot be directly accessed or quantified by external observers.
2. Lack of physical explanation: The hard problem challenges us to explain how physical processes, such as neural interactions, can create subjective experiences. Current scientific methods are adept at explaining physical phenomena but fall short of explaining how these give rise to subjective experiences.
3. Qualia: This refers to the individual instances of subjective, conscious experience, like the way it feels to experience color. Understanding how and why these qualia arise from physical processes remains a significant challenge.
4. Explanatory gap: There is an explanatory gap between our understanding of physical processes and the emergence of subjective experiences. While we can describe and increasingly understand the workings of the brain in detail, this knowledge doesn't straightforwardly translate into an understanding of consciousness.
To put it in my own words, it is more due to a lack of negative evidence for feelings. The importance of causal explanations in understanding cognitive processes has been emphasized throughout, which points out the limitations of behaviorism and highlights the need for reverse engineering to uncover the mechanisms behind observable behaviors (EP). Yet most papers referenced either offer only correlations with feelings (since feelings could not be manipulated), or question even the existence of feeling / the HP. The explanatory gap hence remains, with few substantial contributions made.
Yes, but see other replies about (1) "uncomplemented categories" in this thread, and about (2) causal "degrees of freedom" in other threads: the solution to the EP will have used them all up, leaving none to the HP.
A just-so story is just an interpretation or analogy; it is untested (and often untestable) speculation. Spandrels (what are they?) do not explain the evolutionary origin or adaptive value of UG.
From my understanding, the term ‘spandrel’ was used by Gould and Chomsky as an analogy for the origins of human language/Universal Grammar, because they thought language was a byproduct or side effect of other evolved traits, as opposed to the belief that language evolved directly through natural selection. It is still unclear how Universal Grammar came to be; therefore both of these views are just-so stories. Is that correct?
I don’t recall coming across term 120 (volition) this semester and I was wondering if anyone could explain it to me and situate it in the context of our course?
Hi Jocelyn, I also don't recall encountering this term in this course, but am going to make an educated guess! I think that we require any T3 (or T2?) candidate robot to act with volition in order to capture human "doing" capacity, rather than act according to a particular algorithm or set of rules. The way that a T3 robot interacts with the world and comes to ground symbols through these sensorimotor interactions is all voluntary, perhaps even exploratory, behavior. On the other hand, a robot that is given exact instructions of when to move its hand or pick up an object does not act with volition, and would fail at capturing human behavior (thus getting us no closer to our goal of reverse engineering humans' "doing" capacities). Not sure if this makes sense/is at all correct...
I had the same question. To be honest, "Volition" sounds a bit like a W-Word, easily replaceable with "will", "choice", or even "consciousness" (though maybe that's due to the weasel wordiness of "consciousness"). Jessica, I think you're right about the definition of volition as doing things voluntarily, being able to make one's own choices, but the significance of it is still lost on me... Whether or not a robot, or even a person, has "volition" would just be a reformulation of the OMP, right?
Maybe it could be understood in the context of active learning?
I had an idea for next year's course while thinking about the thought experiment regarding a student being a T3 robot, and we wouldn’t be able to distinguish her from a human. It would be an interesting experiment to create a fake profile on the blog from which the comments are generated only by ChatGPT and see if the students notice it. That would be like a pen-pal Turing Test during course time. With ChatGPT’s impressive progress and good prompting, I’m sure most students would be fooled. I know it requires some time, so maybe a TA or a student could do that job. When I had that idea, I was persuaded that Professor Harnad would have already done it, so I checked the comments and the profiles, but I didn’t find a suspect one.
I know what absolute VS relative judgement means but I don't remember in what context we discussed it in this class. I also don't remember talking about anthropomorphism.
We have studied the concept of relative vs. absolute judgments during week 6. I remember learning about this distinction while reading the 6a article “To Cognize is to Categorize: Cognition is Categorization”. Categorization can be seen as an absolute judgment (since it’s based on identifying an object in isolation) but it is also relative (in the sense that what invariant features need to be selectively abstracted depends on what the alternatives are). Sections 10, 20 and 23 might help you. If I’m correct, anthropomorphism was studied in the context of animal sentience. The term was mentioned in the 11a article where Bryan explained that we should not attribute to fish the ability to feel pain simply because they attempt to escape the noxious stimuli (because they have detectors) and by Professor Harnad in the 11b article when he mentioned that the dictates of our mind-reading abilities are easily dismissed as “anthropomorphic” illusions when there is a financial, personal, or scientific interest behind it. Hope this helps Jocelyn!
This term could be understood as: how do we know whether animals, or so-called lifeless objects like rocks or trees, don't feel? It is of course related to the OMP. I suggest we understand it in the context of ChatGPT. When having conversations with it, we feel like it is "talking" rationally with us; thus it must be cognitive. But this is simply an illusion produced by our mirror capacity, no different from kids thinking a rock could talk because it has eyes and a mouth drawn on it.
I think that's correct! "Stevan says" that Turing would have been in favor of a weak equivalence solution to the easy problem, in that if a robot has the same doing equivalence as a human it could pass the Turing test, regardless of if the mechanisms for doing so are the same as those that humans have. For Turing, weak equivalence is enough.
After our discussion in class yesterday, I am still somewhat confused about Baldwinian evolution. I understand that a Baldwinian trait is one that handles a capacity to learn, but I do not quite understand how that is a type of evolution in the same way that darwinian evolution is. I would think that baldwinian traits would be selected for because of the adaptiveness of the ability to learn rapidly, in the same way that all traits are. In other words, wouldn't baldwinian traits still be influenced by darwinian evolution? My apologies if my question is not quite clear.
I think it's considered a 'different type' of evolution compared to Darwinian evolution because, despite being a trait that is passed down (which is evolution), it's the ability to learn to do a certain thing that is passed on, instead of the behavior or structure itself. I wonder if it's right to say it is a subset of Darwinian evolution in that regard, in that it is a different kind of trait being passed down.
I think the only extent to which Baldwinian evolution is a subset of Darwinian evolution is in that BE selects for general learning ability in a Darwinian fashion (because BE depends on learning). But Megan, I think you're right that Baldwinian traits would still be influenced by DE, the two don't seem easily separable.
Baldwinian evolution does indeed involve the ability to learn, but it's considered distinct from Darwinian evolution because it focuses on the transmission of the ability to learn specific behaviors or skills rather than the direct inheritance of those behaviors or skills themselves. It's like inheriting the potential to learn a language rather than inheriting the language itself.
However, you're correct that the two are intertwined. Baldwinian traits can still be influenced by Darwinian evolution, as the capacity to learn rapidly could provide a survival advantage in certain environments.
From my understanding, multiple realizability refers to the idea that a particular outcome can be reached in many different ways or by different processes. I believe it is heavily related to weak equivalence, which occurs when two devices provide the same output when given the same input, however, their algorithms or way in which they compute the output is different.
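To make weak equivalence concrete, here is a minimal sketch of my own (not an example from the course): two procedures with identical input-output behavior but different internal algorithms, one iterative and one a closed-form formula.

```python
# Weak equivalence: same input -> same output, but via different algorithms.
# Both functions compute the sum 1 + 2 + ... + n.

def sum_iterative(n):
    total = 0
    for i in range(1, n + 1):   # steps through every number, one by one
        total += i
    return total

def sum_formula(n):
    return n * (n + 1) // 2     # a single arithmetic step (Gauss's formula)

# Weakly equivalent: identical I/O behavior, different internal process.
assert all(sum_iterative(n) == sum_formula(n) for n in range(100))
```

From the outside (the I/O behavior) the two are indistinguishable, which is why a weak-equivalence Turing Test cannot tell us which internal mechanism is the one the brain actually uses.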
From my understanding, Universal Grammar is a set of innate fundamental rules/similarities that apply to all languages. However, I am still a bit confused on this topic, can we give an example of what one of these rules might be?
Hi Maria, that is one way to define UG, but I think a better way to look at it from the perspective of the course is as follows: Children are able to learn language, i.e., which sentences are grammatically allowed and which are not, even though they have no negative evidence, i.e., exposure to the sentences that are not grammatically allowed. This is because they never hear ungrammatical sentences, e.g., "Bill said he would do so the window on the second floor." Because of this, children cannot distinguish from evidence the difference between grammatical and ungrammatical sentences, yet they produce their own novel grammatical sentences all the time. The fact that they are able to generate new sentences that follow these rules, despite never learning what the rules are, suggests that the rules must already be there, inborn in all humans. The "fundamental rules/similarities" are just whatever rules distinguish grammatical sentences from ungrammatical ones; what they are specifically is not important for us, just that they are always obeyed and never violated.
Thank you for that explanation Jordan, it has cleared things up! However, I can't help but think that these "innate rules" are more just imitation... The fact that children are not exposed to many breaks in grammar rules would make me think that they are just modelling their parents' use of the language and that there is no inborn grammar.
Hi Maria, I get what you're saying about how children could simply mimic their parents' speech. But linguists, especially Chomsky, say that sentence structure (syntax) is too complicated for kids to learn just by imitating. The idea is, even though kids only hear a few kinds of sentences, they still create new, correct ones they've never heard. Chomsky and others who support Universal Grammar think syntax isn't just about copying; it's about having a deep, natural sense of the basic structure that all languages share. Chomsky believes that learning syntax cannot be attributed to mimicry or general learning alone. The argument is that there's an inborn structure helping kids understand and produce the intricate patterns in language.
I have a couple of terms from the list that I was hoping to get some clarification about. I am still unsure what the dictionary-go-round is, as well as what the explanatory gap is.
Hi Zoe! From my understanding the dictionary-go-round refers to the idea that if an English speaker had, say, a Mandarin dictionary, they would attempt to look up a word, which would only send them to look up other words. Essentially, they would cycle endlessly, as they do not speak Mandarin and cannot make meaning out of arbitrary symbols.
Hi Zoe! From what I've learned in my philosophy of mind class, the explanatory gap is the difficulty (central to the mind-body problem) we have in explaining how the body/the physical gives rise to our felt states. Assuming that feeling is an effect of physical brain events, the gap has to do with explaining its causality (the how and why of the HP). Hope this helps!
Hi Zoe! From my understanding, the explanatory gap just refers to the gap between physical (observable) processes and our subjective felt states. The term was coined by Levine (1983), and it basically suggests that even if we understood everything about physical processes and properties, we would still be left with the question of how physical processes give rise to the subjective quality of felt states. This gap would be closed if we knew the answer to the Hard Problem.
I think Maria is right. Multiple realizability is just many ways to get to one output, just as different paths you take to school will get you to the same location. The professor said, "shit happens, but it happens differently." For example, fish and humans both feel pain and retract from harmful stimuli; even though our neurophysiological and anatomical systems differ, both generate feelings of pain.
We discussed vanishing intersections in class yesterday, however I am not sure I understand this topic and am having a hard time making sense of the notes that I did write down. Could anyone explain this concept to me?
I read the discussion posts on vanishing intersections and the "To Cognize is to Categorize" reading and was confused in class too, so I would also appreciate it if someone could very kid-sibly explain this!
The Vanishing Intersections argument says that categories cannot be learned OR evolved, because it seems impossible to find consistent features in the "sensory shadows" (as I understand it, "sensory shadows" basically means our perceptions of stuff and ideas that are a bit removed from the world because there is the middle-man of perception. "Retinal Shadow" of visual stimuli is a more literal example.). The example in Harnad's paper illustrating this difficulty is trying to find the similarities between the sensory shadows of "beauty", "truth", and "goodness". So when searching for "invariance" (consistent features) in categories, the intersection of sameness is supposedly empty, so categories must be innate in some sense.
But as Harnad points out in "To Cognize...", we do still have the capacity to categorize, and need to account for that ability, so we should reject that they're innate and inexplicable and assume that we can, in fact, categorize.
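A hypothetical toy illustration of the vanishing-intersections worry (the feature sets below are invented for the sake of the example): as you include more diverse members of an abstract category, the set of features shared by ALL members can shrink to nothing.

```python
# "Vanishing intersections": look for features shared by every member of an
# abstract category, e.g. things called "beautiful". With invented feature
# sets for three members, the intersection empties out.
members_of_beautiful = [
    {"symmetric", "colorful", "visual"},     # a painting
    {"symmetric", "auditory", "harmonic"},   # a melody
    {"abstract", "elegant", "formal"},       # a proof
]
shared = set.intersection(*members_of_beautiful)
print(shared)   # set() -- no single feature is common to all members
```

The argument takes this emptiness to show such categories can't be learned from shared sensory features; Harnad's reply is that since we demonstrably do categorize, the feature-detection story must be more sophisticated (learned, approximate, context-dependent invariants), not that categorization is innate and inexplicable.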
34. One of the other challenges that I’ve enjoyed trying to wrap my head around during this class is how and whether we can confidently link components of cognition (e.g., decision making, motivation, memory) to discrete neurophysiological elements. To me, one of the central issues facing neuroscientific research today is understanding the way the brain organizes itself, and the level of analysis that matters for different processes (if one can be causally implicated at all). I think that mirror neurons are an interesting example of us identifying a very precise and replicable pattern of activity but being unable to carry it much further. Although research on mirror neurons ultimately hasn’t gotten us much closer to understanding any neural substrates of cognition, I don’t think it represents a complete loss for neuroscientific research. In fact, I think the continued mystery of HOW MNs do what they do illustrates that we’re not looking at the right level of analysis, at least not always. A growing body of research looks at population dynamics – rather than trying to correlate single-neuron firing to cognitive functions, instead taking the aggregate activity and seeing how it evolves in high-dimensional space (a space represented by as many axes as there are meaningful patterns of activity). This can give us new perspectives on activity, and clarify correlations and indeed causal relationships between neurons and behaviour that are otherwise unidentifiable.
Hi! Could anyone go over the concept of lazy evolution again? I think I got the general idea (evolution is a mindless process, relying on the environment, that only cares about satisficing, not optimality; and it doesn't tell us how or why we can think and feel, only whether we do), but I'd like more clarification or a more kid-sib explanation. How does it relate to the imprinting process we mentioned, and why was it said that the symptoms of evolution's laziness are learning and language? Thank you :)
Hi Mamoune! Lazy evolution is about providing the tendency or motivation to learn, rather than specific knowledge itself. It's as if Darwinian evolution equips us with the tools and a strong desire to figure things out, but doesn't spell out all the answers. This is especially clear in the case of language learning: we're not born with a built-in language; instead, we have the innate motivation to learn language, a perfect example of this 'lazy' aspect of evolution. In this sense, such learning capacities are Baldwinian: what we learn isn't hardcoded in our genes, only the capacity and eagerness to learn it are. I hope my interpretation helps.
I would like to share my thoughts after chatting with ChatGPT regarding the cognition and computation, which contains weasel words. ChatGPT suggested that cognition and computation share a similarity in "symbolic representation," implying that both involve using symbols to represent information. However, this statement might be unclear and confusing, as the term "representation" can have different meanings in various contexts.
Hey guys, I know this may be quite a late question, but after the class on Friday I am still kind of confused about the poverty of the stimulus that we went over in class. I noted down that these are uncomplemented categories, and that sentences obeying ordinary grammar are positive evidence while the ones that violate it would be negative evidence (which is what learners lack), but I am quite lost on how these concepts are all tied together.
Under my understanding of referents, the emphasis of the grounding process is not on whether there is a referent corresponding to the physical world. The importance of the sensorimotor system is that it plays an essential role in first-language acquisition, while later language learning involves more indirect grounding with more abstract "meanings" of symbols. It is also not important whether the word is iconic or symbolic; what matters are the features that enable cognizers to distinguish members from non-members.
My understanding in terms of language and computation is that the syntactic part of language is computation, which ChatGPT performs well. The semantic part includes the direct grounding process with referents; this part is also achievable by T3 robots. Additionally, the feeling that we understand the language is the HP part of language understanding, and there is so far no way to solve it, because of the barrier from the OMP.
Hi Emma, that is also my understanding for approximation in computation, and relates to the difference between analog versus digital systems. An analog system is continuous, which can also be described as a dynamic system. An analog system can be simulated by a digital computation system, but digital computation is discrete, so this simulation will always be approximate.
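A small sketch of my own illustrating that point: a discrete (digital) update rule approximating a continuous process. Shrinking the step size tightens the approximation, but it never becomes exact.

```python
# Digital approximation of an analog (continuous) process: Euler integration
# of exponential decay dx/dt = -x, whose exact continuous solution at t = 1
# is e^(-1). Smaller discrete steps give a tighter, but never exact, result.
import math

def euler_decay(t_end, dt):
    x = 1.0
    for _ in range(int(t_end / dt)):
        x += dt * (-x)          # discrete update standing in for continuous change
    return x

exact = math.exp(-1.0)
coarse = abs(euler_decay(1.0, 0.1) - exact)    # error with big steps
fine = abs(euler_decay(1.0, 0.01) - exact)     # error with small steps
assert fine < coarse    # more steps -> better approximation, still not exact
```

This is the sense in which a discrete computation can simulate a dynamical system to any desired accuracy while remaining an approximation of it.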
To link what you both mentioned to sensorimotor learning: although language is quick and easy (like how we use keys to open the car, it's often an unfelt act), "a picture is worth a thousand words". Even if we can always modify or ameliorate a verbal description, sensorimotor learning is what helps us ground feature-names, since language only gives us approximations of categories.
“Language” is a lot of things: Syntactic rules (OG and UG), vocabulary (Category names and features), phonology.
OG is prescriptive; we make it up as we go along. Linguists have to learn the features of UG (though children can’t, so linguists infer that they must be inborn in their brains). But generative linguistics is an empirical science, so it remains underdetermined, and UG’s features are still only partly known.
Whether a mushroom is “edible” (for humans) is empirical, so both our learned sensorimotor feature-detectors (neural nets) and our verbal feature lists are underdetermined and approximate. The feature list can always be extended, if need be (as with a picture — or an object or the world — and its verbal description, in 1000 words or 10,000).
Important: ditto for a computational model. There too, more features can be added to tighten the approximation.
So language relates to approximation because we approximate UG by trying to uncover its features through theories?
OG is partly true by definition, because we invented it. UG, because of POS, would be hopelessly underdetermined (if it weren't for the fact that adults, including linguists, have an inborn ear for it).
This association with language supplements my understanding of how “Pictures” vs. “1000 words” relate to underdetermination and approximation. I do agree that a picture might say more if the 1000 words don’t even contain the key point or expression needed to communicate effectively. However, I also remember our discussion of the transition from "Show" to "Tell", where the latter allows more specific descriptions and is more adaptive. Are the two concepts analogous, and in what context?
Yes, objects/images are shown, not told. But categorizing objects/images requires feature-detection, which is, like verbal description, approximate.
Hi everyone! After reviewing the keywords document I am a bit confused about the topics “Darwinian survival machines” and “EEA (environment of evolutionary adaptiveness).” I have gone through my notes and do not see mentions of these—does anyone have any information on when we talked about these keywords, and how they relate to other class material? Thank you for your help!
Hi Shona! For EEA, you may find its definition in our 7a reading (starting from p. 362). However, this concept barely relates to the HOW part of EP; it only provides a little explanation of WHY we have our DOing capacities.
I also have no clue about which week we talked about Darwinian survival machines (either week 7 or week 8). From my perspective, it seems like a way to explain what an individual is in the senses of both EvoPsych (Darwinian survival; reproduction-directed) and CogSci (machines; a "cause-effect system"). Hope this leads to less confusion about the concept. Also, please let me know if I've misunderstood or explained anything wrongly.
EEA is the ancestral environment, as compared to the current one (as in the human child's fondness for sugar then and now).
Evolution gives adaptive (survival, reproduction) cause-effect explanations of organisms' anatomy, physiology, and DOing capacities, but not of the fact that we feel. For evolutionary explanation we may as well have been zombies, just doing what needs to be done to survive and reproduce (which includes the capacity to learn, to categorize, to communicate, to "mind-read" and to speak and understand -- but zombily). Its study has so far not revealed a solution to the HP.
The concept of “language” and “implicit VS explicit cognition” have an interesting connection in the example of playing instruments. Implicit cognition happens when someone can do a task without being aware of how they do it exactly. This relates to the “easy problem” of cognition which questions how we can do the things we do. When pianists learn a melody, they first rely on “language” to read the notes on the sheet, which then helps them guide the hand on the keyboard. At this stage, they rely on explicit cognition in multiple ways: by processing the notes through language, and by deliberately positioning their hands on the keyboard.
With practice, the pianist learns to play at faster pace, meaning there is less time to deliberately process which notes come next and how to place the hand to play them. It becomes increasingly difficult to rely on explicit cognition, so the pianist relies on implicit cognition, primarily muscle memory. Playing using muscle memory is relying on what “feels” right — which hand position “feels” like it will play the melody that I’m supposed to. This relates to the “hard problem” which questions how we feel the things we feel, because it’s hard to tell exactly how and why this hand position feels like the right one. At this stage of learning, it can be very difficult for the pianist to play the same melody but slowly, since it forces them to rely on explicit cognition again, which decays without regular practice.
PS: This helps explains why some musicians blank on stage and forget the rest of the melody they’ve spent months practicing. Personally, the moment I ask myself “what notes come next?”, I feel an inner “switch” and my muscle memory disappears. I’m forced to rely on explicit cognition and name the next notes in my mind, until I somehow “switch back” to implicit cognition again. That’s part of the reason why music professors will advise students to regularly practice the same melody both slowly and quickly, so that in case of a “switch” to explicit cognition, you can finish the performance.
Anaïs, very good account, but the explicit/implicit distinction does not align exactly with the felt/unfelt distinction, although it is often correlated with it. And it certainly does not solve the HP (of explaining how or why we FEEL at all rather than just DO).
We didn’t talk about selective attention much this year, but consider that many activities are going on in the brain at once, and the focus of the activity can change (e.g., from what’s on my left to what’s on my right in my visual field) just as my movement can. There is no explanation of why a selective shift in attention (or intention — or movement) has to be FELT, rather than just DONE, zombily.
This is also why it is a misunderstanding of category feature learning and grounding to think that you need to FEEL the “taste” of an apple to learn to ground the category. Unfelt chemical detection would do the same job. “Neural nets” need not feel; they need only detect features of their sensory input. Robots too.
(It is not even clear why a Chinese T2-passer needs to FEEL what it’s like to understand Chinese: Why isn’t just talking the talk and walking the walk enough? Yet Searle correctly reminds us that it does feel like something to understand, and that that is completely missing when he executes the T2-passing program. Therefore cognition is not (just) computation.)
I might have missed this information but has the format/instructions for the final essay been released yet? When is the deadline?
I think we decided it would be on the weekend after the last lecture, with a long time to respond (but enough time for me to hand in the marks in time).
Hi, I don't know if I missed it -- I know that there hasn't been an email sent, but are there any updates about the format/instructions for the final essay and its deadline? Also, should we send our skywritings now?
From my understanding, it will be released on Monday with all the instructions and due on the 21st. We will be given a graph of important concepts and will have to describe the concepts as well as their connections to one another.
32. In trying to wrap this semester’s concepts together, I find myself returning again and again to the idea of certainty, and the question of when and why we can be certain that something is what we think it is. Certainty of feeling, as Descartes describes it, implies that our observations are not trustworthy, and that we can never, in fact, be certain. Bringing that around to our discussion last week, I wonder whether there is any way, with Descartes’ certainty, to have a confident understanding of cognition, or is our only certainty that we can never know? Namely, in the case of cognizing individuals (by which I mean those who are capable of acting differently and appropriately toward different categories of stimulus, as per ‘To Cognize is to Categorize’), the ability to be certain that another is cognizing eludes us, no matter how much thought we throw at the problem (OMP). One thing I have grown to appreciate much more over this semester is how semantics brings us even further from any sort of certainty: being careless or imprecise with language when talking about cognition leads to the rise of weasel words, and makes an already large challenge seem impossible. Developing any sense of certainty on the matter of cognition demands that we be precise in our language, and DO get bogged down in the semantics, and that we do what we can to identify our own limits to introspection, and through that gain an appreciation of what we can and cannot know, to avoid overstating claims and asserting certainty where it is not justified.
Hi Madeleine, I agree that this semester has emphasized the importance of using precise language and avoiding what we refer to as "weasel words". Moreover, distinguishing why these words are "weasel words" has shed light on ways people often avoid answering the real questions of cognitive science. For example, a word like "representation" is "weaselly" because it is homuncular, meaning that it implies some little man in our head is orchestrating our thoughts/feelings. This leads to useless explanations because we then have to explain the causal mechanisms in this little man's head which give rise to his functions. This particular example of the word "representation" was interesting to me, because I so often encounter the term "mental representations" in neuroscience and psychology, and now see ways in which these scientific writings may avoid the "black boxes" of cognition.
Distinguish between what is probably true and what is certainly true. All scientific truths are only probable, not just those of cognitive science. And they are all underdetermined. The OMP (determining whether and what others feel) adds another layer of uncertainty to underdetermination, but it still remains a matter of probability.
I think it's not so much a matter of "precise" language as simple, kid-sibly language...
Could anyone explain to me what "vanishing intersections" relates to? I seem to have missed that one.
Hi Adrienne, I believe vanishing intersections was mentioned in the 'To Cognize is to Categorize: Cognition is Categorization' reading. The "vanishing intersections" problem is a challenge to the idea that categories can be learned or evolved based on shared sensory features. The problem arises because when we look at the sensory shadows of different things, we often find that there is no single feature or set of features that is shared by all members of a category. This makes it difficult to identify what it is that defines a category, and suggests that some categories may be innate or hardwired in the brain rather than learned through experience.
Uncompleted categories ("Layleks") refers to a category where one is only exposed to members of the group, but is never exposed to non-members. Therefore, it is impossible to determine what features are important to distinguish members of this category from others, because one is never exposed to feedback which would illustrate the unifying features of the members of the category and the features that differentiate members from non-members. For example, UG is an example of uncompleted categories--children are never exposed to what is not UG (they lack negative evidence).
Exactly. But it's uncomplemented, not "uncompleted." What does "Laylek" (Lélek) mean? and why is it uncomplemented?
Lélek means "soul" in Hungarian, which is uncomplemented, because soul is just a weasel-word for feeling, and we don't know what isn't a feeling. Our whole lives we have done nothing but feel and can't imagine what a non-feeling would be.
Does someone mind explaining the concept of Blind Watchmaker? Does it have anything to do with intelligent design?
Yes, the Blind Watchmaker is a refutation of the intelligent design argument. The idea behind the original argument is that if you find an object whose form seems unusually suited to its function in a way that couldn't have happened by chance - like a watch - then you would logically infer that someone - like a watchmaker - intentionally designed that object. Applied to the adaptive nature of biological organisms, this argument concludes that these organisms must have been designed by an intelligent agent, who was consciously working toward a goal. Evolution is a blind watchmaker because it's a mechanism by which form can be developed to fit function without intelligent design being involved. Yes, the evolutionary process is working toward adapting the form of an organism to its function (survival and reproduction), but it's not doing so in a conscious or systematic way. Instead, random mutations in the genetic code are like "blind" jabs at improving the organism, because they may or may not improve its chances of survival and reproduction. Progress is made because the random mutations which are advantageous to survival naturally stick around, and over time they build up.
Evolution is blind variation in genetically coded traits and selective retention: but the selective mechanism is also the blind one of whether the genetic trait helps or hinders survival and reproduction. If it helps, it's still there in the progeny and gene-pool in the next generation. If not (because it didn't survive or reproduce), it's gone.
Should remind you of trial/error/feedback learning...
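Since the point is that blind variation plus blind selective retention suffices for adaptation (and that it parallels trial/error/feedback learning), here is a minimal toy simulation. Everything in it (the target genome, population size, mutation rate) is invented purely for illustration; it is not a model from the readings.

```python
import random

random.seed(0)

# A made-up toy setup: genomes are bit-lists, and an arbitrary target
# genome stands in for "well-adapted to the environment."
TARGET = [1] * 20

def fitness(genome):
    # Selection is blind: it scores only the outcome (how well the
    # genome "survives"), not which mutation produced it.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Blind variation: each bit may flip at random, with no foresight.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]                                # selective retention
    offspring = [mutate(random.choice(survivors)) for _ in range(15)]
    population = survivors + offspring

best = max(fitness(g) for g in population)
print(best)  # fitness climbs over generations with no designer involved
```

No step in the loop "knows" where it is going, yet form comes to fit function, just as the Blind Watchmaker argument says.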
Hi guys! In class we briefly talked about the Strong Church/Turing Thesis and how it’s similar to language. We talked about how saying that a cat is on the mat is not the same as a cat actually BEING on the mat. This part I understand, but I was wondering if someone could clarify similarities/differences between the Strong Church/Turing Thesis and language because I am not sure I understood the explanation provided in class.
Hi Lili! I’m also having some trouble understanding the differences between the two, but I think I can say a little bit about how they’re similar. The professor presented language as an alternative to computation in terms of modeling/simulating things, like how you wrote “the cat is on the mat” in a natural language. In my understanding language is just a different option for fulfilling the Strong Church-Turing Thesis, since we can use language to simulate anything computations can.
As an aside - does the mental image I get of a “cat on a mat” when I read the sentence count as the simulation?
Sorry if I answered your question with more questions! Hopefully other people will chime in.
You can simulate/model a cat on a mat computationally, or describe a cat on a mat verbally. Neither of these IS a cat on a mat. One encodes the features of cats on mats computationally, the other encodes the features verbally. Both do it approximately, not exhaustively, but the approximation can be made as close as we like, by adding more features (parameters).
Computability is like describability, and computations can be encoded verbally.
Hi professor, I'm still slightly confused about the difference between language and computation in this case. Are the two direct equivalents in this situation? I had previously thought that computation was a type/subset of language but at your talk yesterday you seemed to suggest otherwise. Thanks in advance!
Computation is a purely formal subset of every language. 2+2=4 is a sentence in English, French, Chinese, and even Hungarian. So is the algorithm for calculating the roots of quadratic equations. But maths does not use the meaning of the math symbols (if there is one): just the shape.
Hi guys! I was wondering if someone could briefly explain how the kernel, core, and MinSet relate to one another? I know we went through it but I'm having some trouble understanding my notes. Thanks in advance!
A dictionary is a set of words in which every word is defined by words that are themselves in the dictionary.
The Kernel (a dictionary within the dictionary) is what's left when you remove every word that does not define any further words, but can be defined from the words that remain. So the Kernel can define inward as well as outward (all the rest of the dictionary).
The Core (which is also a dictionary) is the “biggest strongly connected subset” of the words in the Kernel. There is a definitional path to and from every word in the Core to and from every other word in the Core. The Core can define inward, but not outward.
The Minsets are the smallest number of words within the Kernel that can define all the rest of the words outward (all the rest of the words in the dictionary).
A dictionary has only one Kernel and one Core.
But there are many MinSets in the Kernel, all the same minimal size. All are able to define outward. But none of them is a dictionary: Why not?
I'm not sure if this is correct, but is the reason that no MinSet is a dictionary just that it would take us way too long to communicate anything when you could simply use derived words (as well) where the meaning of each word is a combination of a bunch of words in the MinSet, which is much (MUCH) more efficient? Therefore no natural human language dictionary would contain only MinSet words; every language uses these derived words as well. (I feel like this is at least somewhat wrong because wouldn't this make the Kernel and Core not dictionaries as well... what am I missing?)
Hi Jordan, I think it might be simpler than you're making it. From my understanding, a MinSet isn't a dictionary because it can only define all of the words outside of itself. If it could define the words within it, too, it would just be a Kernel. Since a dictionary is just a set of words in which every word is defined by words that are themselves in the dictionary, a MinSet couldn't be a dictionary because if it defined all of the words in the dictionary (ie. within itself), it wouldn't be a MinSet anymore, and would just be a Kernel.
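The Kernel/Core/MinSet picture above can be made concrete by treating a dictionary as a directed graph, with an edge from each word to the words in its definition. Below is a sketch over a made-up five-word toy dictionary (the words and definitions are invented for illustration; the real analyses were done on full dictionaries). The Core then shows up as the biggest strongly connected set: every word in it can reach, and be reached from, every other word in it along definitional links.

```python
# A made-up toy dictionary: word -> words used in its definition.
# Every word is defined only by words in the dictionary, as required.
toy_dict = {
    "good":  ["thing", "want"],
    "thing": ["good", "want"],
    "want":  ["thing", "good"],
    "bad":   ["good"],           # definable from the core, defines nothing else
    "apple": ["thing", "good"],  # likewise
}

def reachable(graph, start):
    # All words reachable from `start` by following definitional links.
    seen, stack = set(), [start]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(graph.get(w, []))
    return seen

# Strongly connected component of each word: the words it is
# mutually reachable with (definitional paths both ways).
reach = {w: reachable(toy_dict, w) for w in toy_dict}
sccs = {w: frozenset(u for u in toy_dict if w in reach[u] and u in reach[w])
        for w in toy_dict}

core = max(sccs.values(), key=len)  # the biggest strongly connected subset
print(sorted(core))
```

In this toy case the Core comes out as {good, thing, want}. A MinSet would then be a smallest set of words from which all the rest become definable in turn: here {thing, want} works (ground those two and you can define "good", then "bad" and "apple"), but no single word suffices, because "good", "thing", and "want" each need the other two.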
Hi guys! After reviewing the terms list I’m still a bit confused about underdetermination. From what I’ve gathered, this is the problem where a hypothesis/theory can’t be sufficiently explained from the information available in the environment, or, you can’t use the amount of available evidence to support one view or another. I think this relates to Chomsky’s poverty of the stimulus argument, with the UG proposal as a way to respond to the problem of underdetermination of environmental evidence.
If I’m connecting the term to the wrong unit or I’ve defined it wrong please let me know!
All empirical hypotheses are underdetermined: they are merely approximate and probable, not exhaustive and certain, like a maths proof. But the approximation can always be tightened, both by adding more sensory feature-detectors, and by enhancing the computational simulation or the verbal description ("1000 to 10,000 words").
Yes, this is related to POS, but in the special case of UG, it is more radical: It's not that you haven't yet sampled enough positive and negative examples (members and non-members) to learn their features by trial and error: You have not sampled any negative examples at all (unless you are an adult MIT linguist with an "ear" that can detect UG errors, because UG is already innately encoded in your brain -- OR because of some sort of universal constraint on the nature of verbal thought).
When looking at the keywords, I realized I didn't know what the distinction was between mental states and felt states. My first instinct was to assume that not all mental states are felt states, but if thinking is felt, shouldn't all mental states be felt states? Are there mental states that are unfelt?
"mental" and "mind" are weasel-words. All "mental" states are felt states and all felt states are "mental." (To have a "mind" is to have the capacity to feel.)
Prof Harnad, in every one of my skywritings, I try painstakingly to avoid weasel words, a practice influenced by your teachings! However, I'm not sure if someone made this comment already, but I can't help but notice the presence of one such word in our course title... "Categorization, Communication, and CONSCIOUSNESS." Consciousness is one of your least favourite weasel words as it's just a fancy word for feeling. Although the name CATCOMCON is quite iconic and CATCOMFEEL does not have the same effect, the rationale behind its inclusion intrigues me!
Miriam, you are absolutely right! "Consciousness" is a weasel-word.
And it is ironic (and perhaps a little confusing) that it appears in the name of the course.
The course has evolved across the years, but my addiction to alliteration is not the only culprit.
I've thought about changing it, but although "feeling" is not a weasel word, almost everyone (until they’ve taken this course) assumes it just means emotion. And although emotion is indeed a felt state, it's only one of many kinds of felt states (sensation, warmth, hunger, meaning, understanding, thinking) that, before taking this course, most people don’t realize are felt states too.
So I have left the C-word in the title, and then taught in the course why it's a weasel-word. Maybe I'll put it in scare-quotes -- ...and "Consciousness" next year.
What do others think?
I think that it's great that Consciousness is in the title of the course for the reason Prof. Harnad gave and also because it lures in people who think that Consciousness is something beyond and more mystical than simply feeling (such as myself a few months ago), which are the people who most need to take this course, to learn that that is not the case.
1. À la my classmates, I’m going to write out my understanding of some of the topics I feel a bit fuzzier on, to test my understanding. Broadly, approximation refers to the process of becoming more accurate in categorizing. We talked about approximation in two respects. First, computational approximation, which can be maximally improved until there is a certain and precise grounding of numbers, theorems, and axioms, where they have a single, stable meaning; in mathematics, approximations can be reduced to zero. Second, language approximation refers to the fact that the words we use necessarily range in scope, and depending on the purpose our language is carrying out, we may wish to be more or less precise in the level of detail we convey. E.g., I can call our school McGill, or a university, or an establishment, but if what I want to convey is an idea specific to McGill and not other universities or establishments, I need a less approximate term. That said, there is no promise that a word carries the same meaning from person to person, and the approximation in communication /between/ individuals creates variability and unclarity in categorization. An optimal language would be one in which all speakers share the same definitions for each word, as information would be communicated in the most efficient way under these circumstances. A key difference between mathematical and language approximation is that mathematical categories can be exact, without any room for approximation, which is why we can have mathematical certainty but not certainty in language. Let me know if I’m getting this wrong, or if anyone has anything to add! Thanks!
Approximation: see above.
Don't mix up (1) computation in maths (theorem-proving, problem-solving), which can be exhaustive and exact, hence certain, with (2) computational modelling in empirical science (e.g., of gravity, or an ice-cube), which is approximate and a matter of probability on the available evidence so far.
For language: propositions describe features. They are finite in length. But you can always lengthen them with more features.
Don't mix up approximation/underdetermination with polysemy (which is that words can be ambiguous, can have multiple meanings, or can have a different grounding (MinSet) for you and for me).
About "optimality," see the reply below.
Hi guys, while reviewing my notes I wrote down s/s (spiders/sex) many times when we did evolutionary psychology. However, I am having a hard time understanding what that really refers to now… Could anyone clear this up?
Hi, I understood it to represent the main evolutionary pressures, which are to avoid anything harmful and to reproduce. I understood it to be related to other concepts such as the Hard Problem in that s/s provides an explanation for many things such as behaviors or our anatomy, but when it comes to understanding where feelings come from, the s/s explanation does not really help. It helps in explaining nociception, but not the actual feeling of pain.
Thank you Omar, that really cleared things up, especially its connection to the Hard Problem.
Computational models of the physical world (ice-cubes, rockets, gravity, T3 cognitive capacity) are approximate.
But computation within mathematics (applying an algorithm to calculate the roots of a quadratic equation, proving theorems on the basis of deduction from formal axioms), is exact, because it is just formal, not empirical, like scientific modelling.
“An apple is a round, red fruit” is approximate and revisable. It could be made more and more exact by adding 10,000 more features. It could even turn out to be wrong for certain future exceptions we discover (are quinces apples?).
“An even number is a number divisible by two” is a definition, just as “a bachelor is an unmarried man” is. (These are examples that have been done to death by philosophers.)
Think of approximation as the features distinguishing the members from the nonmembers of a category. Neural nets, and scientists, have to find the features, and they can always turn out to have been wrong, or not enough. But in maths (tautologies) they are just a formal agreement on how to use symbols.
(I think language is "optimal" enough; it's knowledge that's wanting. There's always room for 10,000 more words.)
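The point made above, that a description like "an apple is a round, red fruit" is approximate but can be tightened by adding more features, can be sketched concretely. The candidate objects and features below are made up purely for illustration: each added feature-detector excludes more non-members, so the description becomes a closer (though never exhaustive or certain) approximation to the category.

```python
# Made-up candidates, each represented as the set of features it exhibits.
candidates = [
    {"round", "red", "fruit", "has-core"},  # an apple
    {"round", "red", "fruit"},              # a cherry
    {"round", "red"},                       # a red ball
    {"round"},                              # the moon
]

def matches(candidate, description):
    # A candidate fits the description if it has every described feature.
    return description <= candidate

# "An apple is a round thing" is far too approximate; each added
# feature narrows the set of things the description still admits.
counts = []
for description in [{"round"},
                    {"round", "red"},
                    {"round", "red", "fruit"},
                    {"round", "red", "fruit", "has-core"}]:
    counts.append(sum(matches(c, description) for c in candidates))

print(counts)  # [4, 3, 2, 1]: fewer non-members slip through each time
```

And, as with any empirical category, the feature list remains revisable: a future exception (a quince?) could still show that even the longest description was wrong.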
From my understanding the Whorf Hypothesis states that our different languages influence the way in which we experience the world. For example, a particular language may make the distinction between two shades of blue and have commonly known names for them, so speakers of this language will know this difference. The list of words includes the “Whorf hypothesis (Strong vs Weak)”, but I am not sure I understand the strong vs weak aspects of the hypothesis. Can someone explain this?
Hi Maria! The strong/weak Whorf hypothesis was discussed in Week 6b. From my understanding, I think that the weak Whorf hypothesis is closer to the example that you provided, where language influences how we experience and categorize the world. The strong Whorf hypothesis is that language completely shapes how something is perceived. For example, the strong Whorf hypothesis would say that we perceive colours as distinct, and categorically separate, in the rainbow because colours are named categorically.
Shona's reply is right. And learned CP is an example of Weak W/S. How?
Learned categorical perception, in the context of the Whorf Hypothesis, states that language learning influences the way individuals perceive and categorize stimuli. Learned categorical perception can be considered an example of the weak Whorf hypothesis, where language plays a role in shaping categories but doesn't impose rigid and innate boundaries.
Hello, apologies for all my questions… I hope that they can also help clarify things for others. While going over the list of words, I am unsure what “system reply” means in the context of this course. Is anyone familiar?
Hi Maria! The Systems Reply is linked to the symbol grounding problem. It was a reply to Searle's Chinese Room argument, saying that even though Searle/the individual in the room may not understand the characters he's manipulating, the "system" he is part of (the room, the rule ledger, the data banks of Chinese characters, etc) DOES understand.
Searle's response is to allow the individual to incorporate all the above components, ex. by memorizing them. Even so, he still doesn't understand the Chinese symbols, and therefore neither could the system.
If you want to look at the specific reading it's Searle's "Minds, Brains, and Programs" around page 5. Hope this helps!
That's it. But Searle has to not only memorize the Chinese T2-passing algorithm, but execute it while being T2-tested, yet still be able to tell us in English that "while I do that, I do not understand the symbols."
Hi everyone, I'm still struggling a little with the concept of Induction vs. Instruction. Could someone clarify how this applies to this course, in a kid-sib way? :)
Hi Mal! I think that the concept of instruction vs induction is covered in week 8/9 (especially the Blondin-Massé et al paper). As I understand it, this concept applies to the course in that it refers to a distinction within categorization, where categories can be learned through induction (direct experience) and instruction (through language). I think that this distinction highlights the importance of language for how we are able to do what we are able to do, and how advantageous it was to evolve to use language the way we do. The Blondin-Massé et al paper demonstrates, through an artificial life simulation, how categories are learned through induction vs instruction, and the advantages that being able to learn categories through instruction provides. The paper also highlights that there are some instances where learning through direct experience (induction) is more beneficial than learning by being instructed by another. Therefore, I think this concept of induction vs instruction relates to the course in that it is concerned with the symbol grounding problem (there must be some induction before instruction can occur), the evolutionary adaptiveness of language, and the evolution of language (as moving from showing to telling).
Yes, induction is learning to categorize directly: learning, through sensorimotor trial-and-error supervised learning, to detect the category's distinguishing features. Instruction is learning the features indirectly, by being told. But the names of the features have to be already learned, named, hence grounded categories.
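The induction/instruction contrast can also be sketched in a toy program. This is my own made-up example, not the Blondin-Massé et al. simulation itself: the inductive learner must discover the distinguishing feature through repeated trials with corrective feedback, while the instructed learner is simply told the feature name, which only helps because that name ("spotted") is already a grounded category.

```python
import random

random.seed(1)

# Each "mushroom" is a bundle of feature values; the feature names
# ("spotted", "tall") are assumed to be already-grounded categories.
def make_mushroom():
    return {"spotted": random.random() < 0.5, "tall": random.random() < 0.5}

def is_edible(m):
    return m["spotted"]  # the hidden rule the learner must acquire

# INDUCTION: direct trial-and-error learning with corrective feedback.
def learn_by_induction(n_trials=200):
    scores = {"spotted": 0, "tall": 0}
    for _ in range(n_trials):
        m = make_mushroom()
        for feature in scores:
            if m[feature] == is_edible(m):  # feedback on each trial
                scores[feature] += 1
    return max(scores, key=scores.get)  # best-predicting feature wins

# INSTRUCTION: indirect learning by being told, in words,
# "an edible mushroom is a spotted one" -- no trials needed.
def learn_by_instruction():
    return "spotted"

print(learn_by_induction(), learn_by_instruction())  # both arrive at "spotted"
```

The contrast the paper draws falls out of the sketch: instruction is vastly cheaper (one proposition versus hundreds of risky trials), but it presupposes that at least some categories, the feature names, were already grounded by induction.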
Based on what we discussed in class, I consulted ChatGPT on why the hard problem is "hard", as it has always seemed a mystery to me:
1. Subjective Nature: Consciousness is inherently subjective, making it difficult to objectively analyze or measure. Unlike observable behaviors or neural activity, subjective experiences cannot be directly accessed or quantified by external observers.
2. Lack of Physical Explanation: The hard problem challenges us to explain how physical processes, such as neural interactions, can create subjective experiences. Current scientific methods are adept at explaining physical phenomena but fall short in explaining how these give rise to subjective experiences.
3. Qualia: This refers to the individual instances of subjective, conscious experience, like the way it feels to experience color. Understanding how and why these qualia occur from physical processes remains a significant challenge.
4. Explanation Gap: There is an explanatory gap between our understanding of physical processes and the emergence of subjective experiences. While we can describe and increasingly understand the workings of the brain in detail, this knowledge doesn't straightforwardly translate into an understanding of consciousness.
To put it in my words, it would be more due to a lack of negative evidence for feelings. The importance of causal explanations in understanding cognitive processes has been emphasized throughout, which points out the limitations of behaviorism and highlights the need for reverse engineering to uncover the mechanisms behind observable behaviors (EP). Yet most papers being referenced either suggest correlations to why we have feelings (since they could not be manipulated), or question even the existence of feeling / the HP. The explanatory gap hence remains with little substantial contributions made.
Yes, but see other replies about (1) "uncomplemented categories" in this thread, and about (2) causal "degrees of freedom" in other threads: the solution to the EP will have used them all up, leaving none to the HP.
Hi everyone, could someone explain to me:
1) what the definition of a just-so story is again
and/or
2) why language as a spandrel is a just-so story?
A just-so story is just an interpretation or analogy; it is untested (and often untestable) speculation. Spandrels (what are they?) do not explain the evolutionary origin or adaptive value of UG.
From my understanding, the term ‘spandrel’ was used by Gould and Chomsky as an analogy to describe the origins of human language/Universal Grammar, because they thought that language is a byproduct or side effect of evolutionary biology. This is opposed to the belief that languages directly evolved through natural selection. It is still unclear how universal grammar came to be; therefore both of these views are just-so stories. Is that correct?
I don’t recall coming across term 120 (volition) this semester and I was wondering if anyone could explain it to me and situate it in the context of our course?
Hi Jocelyn, I also don't recall encountering this term in this course, but am going to make an educated guess! I think that we require any T3 (or T2?) candidate robot to act with volition in order to capture human "doing" capacity, rather than act according to a particular algorithm or set of rules. The way that a T3 robot interacts with the world and comes to ground symbols through these sensorimotor interactions is all voluntary, perhaps even exploratory, behavior. On the other hand, a robot that is given exact instructions of when to move its hand or pick up an object does not act with volition, and would fail at capturing human behavior (thus getting us no closer at our goal of reverse engineering human's "doing" capacities). Not sure if this makes sense/is at all correct...
I had the same question. To be honest, "Volition" sounds a bit like a W-Word, easily replaceable with "will", "choice", or even "consciousness" (though maybe that's due to the weasel wordiness of "consciousness"). Jessica, I think you're right about the definition of volition as doing things voluntarily, being able to make one's own choices, but the significance of it is still lost on me... Whether or not a robot, or even a person, has "volition" would just be a reformulation of the OMP, right?
Maybe it could be understood in the context of active learning?
Jessica, I reread your comment and realized this might have been what you were already saying with your robot-learning examples.
I had an idea for next year's course while thinking about the thought experiment regarding a student being a T3 robot, and we wouldn’t be able to distinguish her from a human. It would be an interesting experiment to create a fake profile on the blog from which the comments are generated only by ChatGPT and see if the students notice it. That would be like a pen-pal Turing Test during course time. With ChatGPT’s impressive progress and good prompting, I’m sure most students would be fooled. I know it requires some time, so maybe a TA or a student could do that job. When I had that idea, I was persuaded that Professor Harnad would have already done it, so I checked the comments and the profiles, but I didn’t find a suspect one.
I know what absolute VS relative judgement means but I don't remember in what context we discussed it in this class. I also don't remember talking about anthropomorphism.
We have studied the concept of relative vs. absolute judgments during week 6. I remember learning about this distinction while reading the 6a article “To Cognize is to Categorize: Cognition is Categorization”. Categorization can be seen as an absolute judgment (since it’s based on identifying an object in isolation) but it is also relative (in the sense that what invariant features need to be selectively abstracted depends on what the alternatives are). Sections 10, 20 and 23 might help you.
If I’m correct, anthropomorphism was studied in the context of animal sentience. The term was mentioned in the 11a article where Bryan explained that we should not attribute to fish the ability to feel pain simply because they attempt to escape the noxious stimuli (because they have detectors) and by Professor Harnad in the 11b article when he mentioned that the dictates of our mind-reading abilities are easily dismissed as “anthropomorphic” illusions when there is a financial, personal, or scientific interest behind it.
Hope this helps Jocelyn!
This term could be understood as: how do we know whether animals, or so-called lifeless objects like rocks or trees, don’t feel? And of course it is related to the OMP. I suggest we should understand it in the context of ChatGPT. When having conversations with it, we feel like it is "talking" rationally with us; thus it must be cognitive. But this is simply an illusion brought on by our mirror capacity, no different from kids thinking a rock could talk because it has eyes and a mouth drawn on it.
I think that's correct! "Stevan says" that Turing would have been in favor of a weak-equivalence solution to the easy problem: if a robot has the same doing capacity as a human, it could pass the Turing Test, regardless of whether the mechanisms for doing so are the same as those that humans have. For Turing, weak equivalence is enough.
After our discussion in class yesterday, I am still somewhat confused about Baldwinian evolution. I understand that a Baldwinian trait is one that involves a capacity to learn, but I do not quite understand how that is a type of evolution in the same way that Darwinian evolution is. I would think that Baldwinian traits would be selected for because of the adaptiveness of the ability to learn rapidly, in the same way that all traits are. In other words, wouldn't Baldwinian traits still be influenced by Darwinian evolution? My apologies if my question is not quite clear.
I think it's considered a 'different type' of evolution compared to Darwinian evolution because, despite being a trait that is passed down (which is evolution), it's the ability to learn to do a certain thing that is passed on, instead of the behavior or structure itself. I wonder if it's right to say that it is a subset of Darwinian evolution in that regard, in that it is a different kind of trait getting passed down.
I think the only extent to which Baldwinian evolution is a subset of Darwinian evolution is in that BE selects for general learning ability in a Darwinian fashion (because BE depends on learning). But Megan, I think you're right that Baldwinian traits would still be influenced by DE, the two don't seem easily separable.
Baldwinian evolution does indeed involve the ability to learn, but it's considered distinct from Darwinian evolution because it focuses on the transmission of the ability to learn specific behaviors or skills, rather than the direct inheritance of those behaviors or skills themselves. It's like inheriting the potential to learn a language rather than inheriting the language itself.
However, you're correct that the two are intertwined. Baldwinian traits can still be influenced by Darwinian evolution, as the capacity to learn rapidly could provide a survival advantage in certain environments.
From my understanding, multiple realizability refers to the idea that a particular outcome can be reached in many different ways or by different processes. I believe it is heavily related to weak equivalence, which occurs when two devices provide the same output when given the same input, however, their algorithms or way in which they compute the output is different.
From my understanding, Universal Grammar is a set of innate fundamental rules/similarities that apply to all languages. However, I am still a bit confused on this topic. Can someone give an example of what one of these rules might be?
Hi Maria, that is one way to define UG, but I think a better way to look at it from the perspective of the course is as follows: Children are able to learn language, i.e., which sentences are grammatically allowed and which are not, even though they have no negative evidence, i.e., the sentences that are not grammatically allowed. This is because they never hear sentences that are not grammatically allowed, e.g., "Bill said he would do so the window on the second floor.". Because of this, children cannot learn from evidence the difference between grammatical and ungrammatical sentences, yet they produce their own novel grammatical sentences all the time. The fact that they are able to generate new sentences that follow these rules, despite never learning what the rules are, suggests that the rules must already be there, inborn in all humans. The "fundamental rules/similarities" are just whatever rules distinguish grammatical sentences from ungrammatical ones; what they are specifically is not important for us, just that they are always obeyed and never violated.
Thank you for that explanation Jordan, it has cleared things up! However, I can't help but think that these "innate rules" are really just imitation... The fact that children are not exposed to many breaches of grammar rules makes me think that they are just modelling their parents' use of the language and that there is no inborn grammar.
Hi Maria,
I get what you're saying about how children could simply mimic their parents' speech. But linguists, especially Chomsky, argue that sentence structure (syntax) is too complicated for kids to learn just by imitating. The idea is that even though kids only hear a few kinds of sentences, they still create new, correct ones they've never heard. Chomsky and others who support Universal Grammar think syntax isn't just about copying; it's about having a deep, natural sense of the basic structure that all languages share. Chomsky believes that learning syntax cannot be attributed to mimicry or learning alone. The argument is that there's an inborn structure helping children understand and produce the intricate patterns of language.
I have a couple of terms from the list that I was hoping to get some clarification about. I am still unsure what the dictionary-go-round is, as well as the explanatory gap.
Hi Zoe! From my understanding, the dictionary-go-round refers to the idea that if an English speaker had, say, a Mandarin dictionary, they would look up a word, and its definition would send them to look up other words. Essentially, they would cycle endlessly, as they do not speak Mandarin and cannot make meaning out of arbitrary symbols.
Hi Zoe! From what I've learned in my philosophy of mind class, the explanatory gap is the difficulty (central to the mind-body problem) we have in explaining how the body/the physical gives rise to our felt states. Assuming that feeling is an effect of physical brain events, the gap has to do with explaining its causality (the how and why of the HP). Hope this helps!
Hi Zoe! From my understanding, the explanatory gap just refers to the gap between physical (observable) processes and our subjective felt states. It was first coined by Levine (1983), and it basically suggests that even if we understand everything about physical processes and properties, we are still left with the question of how physical processes give rise to the subjective quality of felt states. This gap would be closed if we knew the answer to the Hard Problem.
I think Maria is right. Multiple realizability is just many ways to get to one output, as different paths you take to school will still bring you to the same location. The professor said, "shit happens, but it happens differently." For example, fish and humans both feel pain and retract from harmful stimuli; even though we differ in neurophysiological and anatomical systems, both still generate feelings of pain.
We discussed vanishing intersections in class yesterday; however, I am not sure I understand this topic and am having a hard time making sense of the notes that I did write down. Could anyone explain this concept to me?
I read the discussion posts on vanishing intersections and the "To Cognize is to Categorize" reading and was confused in class too, so I would also appreciate it if someone could very kid-sibly explain this!
The Vanishing Intersections argument says that categories cannot be learned OR evolved, because it seems impossible to find consistent features in the "sensory shadows" (as I understand it, "sensory shadows" basically means our perceptions of things and ideas, which are a bit removed from the world because of the middle-man of perception; the "retinal shadow" of a visual stimulus is a more literal example). The example in Harnad's paper illustrating this difficulty is trying to find the similarities between the sensory shadows of "beauty", "truth", and "goodness". When searching for "invariance" (consistent features) in such categories, the intersection of sameness is supposedly empty, so categories must be innate in some sense.
But as Harnad points out in "To Cognize...", we do still have the capacity to categorize, and we need to account for that ability, so we should reject the conclusion that categories are innate and inexplicable and accept that we can, in fact, learn to categorize.
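A kid-sib toy sketch of the vanishing-intersections worry (the feature sets below are made up for illustration): if we model a category's "invariance" as the intersection of each member's sensory-shadow features, the intersection can come up empty for abstract categories.

```python
# Toy sketch of vanishing intersections. A category's "invariant"
# features are modeled as the intersection of its members' feature
# sets (all feature sets here are invented examples).
from functools import reduce

def invariants(members):
    """Intersect the feature sets of all members of a category."""
    return reduce(set.intersection, members)

# A concrete category: the members' shadows overlap.
zebras = [{"striped", "four-legged", "animal"},
          {"striped", "four-legged", "animal", "young"}]
print(sorted(invariants(zebras)))   # ['animal', 'four-legged', 'striped']

# An abstract category like "beauty": instances may share no feature,
# so the intersection "vanishes".
beauty = [{"sunset", "orange"},
          {"sonata", "minor-key"},
          {"theorem", "elegant"}]
print(sorted(invariants(beauty)))   # []
```

The argument's worry is the second case; Harnad's reply, as I read it, is that since we demonstrably do categorize, the right conclusion is that usable invariants must be findable (or learnable), not that categorization is innate and inexplicable.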
Hi! Could someone explain the differences between approximation in categorization, computation, and language?
One of the other challenges that I’ve enjoyed trying to wrap my head around during this class is how and whether we can confidently link components of cognition (e.g., decision making, motivation, memory) to discrete neurophysiological elements. To me, one of the central issues facing neuroscientific research today is understanding the way the brain organizes itself, and the level of analysis that matters for different processes (if one can be causally implicated at all). I think that mirror neurons are an interesting example of us identifying a very precise and replicable pattern of activity but being unable to carry it much further. Although research on mirror neurons ultimately hasn’t gotten us much closer to understanding any neural substrates of cognition, I don’t think it represents a complete loss for neuroscientific research. In fact, I think the continued mystery of HOW MNs do what they do illustrates that we’re not looking at the right level of analysis, at least not always. A growing body of research looks at population dynamics: rather than trying to correlate single-neuron firing with cognitive functions, it takes the aggregate activity and sees how it evolves in high-dimensional space (a space represented by as many axes as there are meaningful patterns of activity). This can give us new perspectives on activity, and clarify correlations and indeed causal relationships between neurons and behaviour that are otherwise unidentifiable.
Hi! Could anyone go over the concept of lazy evolution again? I think I got the general idea (evolution is a mindless process, relying on the environment, that only cares about satisficing, not optimality; and it doesn't tell us how and why we can think and feel, only that we do), but I'd like more clarification or a more kid-sib explanation. How does it relate to the imprinting process we mentioned, and why was it said that the symptoms of evolution's laziness are learning and language? Thank you :)
Hi Mamoune! Lazy evolution is about providing the tendency or motivation to learn, rather than specific knowledge itself. It's as if Darwinian evolution equips us with the tools and a strong desire to figure things out, but doesn't spell out all the answers. This is especially clear in the case of language learning: we're not born with a built-in language; instead, we have the innate motivation to learn language, a perfect example of this "lazy" aspect of evolution. In this sense, all learning can be seen as Baldwinian: what we learn isn't hardcoded in our genes; only the capacity and eagerness to learn it are. I hope my interpretation helps.
I would like to share my thoughts after chatting with ChatGPT about cognition and computation, a conversation that contained weasel words. ChatGPT suggested that cognition and computation share a similarity in "symbolic representation," implying that both involve using symbols to represent information. However, this statement may be unclear and confusing, as the term "representation" can have different meanings in different contexts.
Hey guys, I know this may be quite a late question, but after the class on Friday I am still kind of confused about the poverty of the stimulus that we went over in class. I took down the note that these are uncomplemented categories, and that sentences obeying ordinary grammar are positive evidence while ones that violate it are negative evidence (which is what children are lacking), but I am quite lost on how these concepts all tie together.
Under my understanding of referents, the emphasis of the grounding process is not on whether there is a referent corresponding to the physical world. The importance of the sensorimotor system is that it plays an essential role in first-language acquisition, while later language learning involves more indirect grounding with more abstract "meanings" of symbols.
It is also not important whether the word is iconic or symbolic; what matters are the features of the referents that enable cognizers to distinguish members from non-members.
My understanding of language and computation is this: the syntax part of language is computation, which ChatGPT performs well too. The semantic part involves direct grounding with referents, which is also achievable by T3 robots. Additionally, the feeling that we understand the language is the HP part of language understanding, and there is so far no way to solve it, because of the barrier of the OMP.