Monday, August 28, 2023

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20. In Dedrick, D. (Ed.), Cognition, Computation, and Pylyshyn. MIT Press.


Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

162 comments:

  1. Hi,

    Could someone please explain this passage to me (page 10, last paragraph)?

    The root of the problem is the symbol-grounding problem: How can the symbols in a symbol system be connected to the things in the world that they are ever-so-systematically interpretable as being about—connected directly and autonomously, without begging the question by having the connection mediated by that very human mind whose capacities and functioning we are trying to explain? For ungrounded symbol systems are just as open to homuncularity, infinite regress, and question-begging as subjective mental imagery is!

    I also do not understand the symbol grounding problem for the life of me. I looked it up and according to Wikipedia, ‘[…]the symbol grounding problem is concerned with how it is that words (symbols in general) get their meanings, and hence is closely related to the problem of what meaning itself really is.’

    If I take the word ‘tree’ as an example, I suppose that someone, at some point, pointed to a tree and decided that we would refer to that *thing* as a *tree* henceforth and that is how the word ‘tree’ got its meaning. So, I don’t understand why we are asking ourselves how words in general get their meanings. I can see that this is pertinent when we are talking about abstract terms; we indeed cannot define abstract terms without using more words. But then again, how else would we define abstract terms?

    In short, I fail to see how the symbol grounding problem is the root of the problem that is discussed in the paper and I also do not understand the symbol grounding problem.

    Replies
    1. I think the symbol-grounding problem has to do with how we associate an object’s meaning with a sensory experience. As humans, when we see a tree, we go through a variety of sensory experiences. We know what it feels like to see it or touch it. We can imagine the bark to be rough and hard, and the leaves to be soft and supple.

      Meanwhile, when you give the meaning of “tree” to a machine, it may be able to associate the image of a tree with the word, but it may not be able to go through the rich sensory experience that we do. It may be able to learn that a leaf is “supposed” to feel soft and supple, but how can we make it go through the sensory experience itself? Because can a machine truly be cognizant if it doesn’t?

      Overall, I think the symbol-grounding problem has to do with figuring out how we can associate the simple word tree to those sensory experiences “directly and autonomously” as stated on page 8. Nobody tells us that touching the bark is supposed to feel rough. We come to that conclusion ourselves through sensory experience, and then associate the sensation to the word “rough”.

      The heart of the problem might be to figure out how this connection was made possible in our brain, and how we can program it in a machine, perhaps in a newer “scaled up Turing Test.”

      I found this reading to be challenging, so this interpretation may be wrong. Please feel free to correct me.

    2. I think another useful comparison is to Searle's Chinese room argument. Basically, Searle argued that a purely computational model of cognition is insufficient since it fails to capture how our minds are able to connect symbols to their meaning. For example, if you were to perform the Turing Test in Chinese, you would learn to apply specific rules in response to specific (Chinese) symbols. Because the symbols are in Chinese, you would have no idea what the actual meanings of the symbols are, but you would be able to complete the Turing Test and apply each of these rules nonetheless. However, this is not how our minds actually work - we can grasp the meaning or the sense of things (as Anais pointed out) and the idea is that a computational model of cognition does not account for that.

    3. Ashiha, the quoted passage refers to the fact that Pylyshyn had rejected mental imagery as the explanation of cognition because it depends on a “homunculus,” a little person in the head that sees and understands the images you see when you’re introspecting. Pylyshyn thought he could get rid of the problem of the homunculus by replacing images with propositions. But propositions are just sentences in the head. You still need a homunculus to explain how we understand them. So next Pylyshyn suggested that there were computations being executed in the head instead of propositions. Computations are symbol-manipulations, according to rules (algorithms), that can be executed by a computer. So cognition is computations in the head.

      Trouble is that computations are just symbol-manipulations; they can be executed by a machine, but they still have to be interpreted by a user. Back to the homunculus again.

      There are no doubt images, propositions and computations going on in the brain. But that is not a causal explanation of cognition or cognitive capacity unless there is a way to connect them to the things in the world that the images, words, propositions and computations are ABOUT.

      The course will be about this.

      Yes, “tree” refers to trees. But how does that happen? You first need to recognize trees, and distinguish them from phone poles and bushes. Only once you can recognize what is and is not a member of the category “tree” can you then label it a “tree.” The labelling part is trivial, after the real work has been done.

      Anais, you’re closer to the real problem, but the solution is not “association” (which is another weasel word, just like “experience”); but learning to perceive and recognize what is in the category “tree” and what isn’t. And this requires category learning, by trial and error, with feedback when you get it right and when you get it wrong. (There are now computational models, neural nets, that can help in category learning; but there’s more to it than that; and that’s sensorimotor, not computational. It can be modelled by computation, but it cannot BE just computation. See the beginning of the discussion about the ice-cube.)

      But that will come later. Now you just need to understand what computation is, what it can do (which is a lot!), yet why that’s not enough.

      Sophie, as a doctoral student, you are somewhat ahead of the game: the Turing Test will be next week and Searle the week after. But before we decide that computation is not enough, let’s get it clear what computation is, what it CAN do, and only then try to understand what it CAN’T do.

    4. NOTE TO EVERYONE: Please read the other commentaries in the thread, and especially my replies, before posting yours, so that you don't just repeat the same thing.

    5. **BLOGGER BUG**: ONCE THE NUMBER OF COMMENTS REACHES 200 OR MORE [see the count at the beginning of the commentaries] YOU CAN STILL MAKE COMMENTS, BUT TO SEE YOUR COMMENT AFTER YOU HAVE PUBLISHED IT YOU NEED TO SCROLL DOWN TO ALMOST THE BOTTOM OF THE PAGE and click: “Load more…”
             ________________
                Load more…
             ________________
                    ——
      After 200 has been exceeded EVERYONE has to scroll down and click “Load more” each time they want to see all the posts (not just the first 200), and they also have to do that whenever they want to add another comment or reply after 200 has been exceeded.
      If you post your comment really late, I won’t see it, and you have to email me the link so I can find it. Copy/Paste it from the top of your published comment, as it appears right after your name, just as you do when you email me your full set of copy-pasted commentaries before the mid-term and before the final.
                    ——
      WEEK 5: Week 5 is an important week and topic. There is only one topic thread, but please read at least two of the readings, and do at least two skies. I hope Week 5 will be the only week in which we have the 200+ overflow problem, because there are twice the usual number of commentaries: 88 skies + 88 skies + my 176 replies = 352! In every other week it’s 2 separate topic threads, each with 88 skies plus my 88 replies (plus room for a few follow-ups when I ask questions).

  2. From this article, I take it that many previous theories and discussions have fallen into the fallacy of merely observing the products of our brain and/or mind. For example, symbols are artificial and only have meaning once humans give them one. As the article mentions, introspection is the wrong way to dig out what happens within the black box, since we merely look at the products coming out of it (the subject's verbal reports) instead of trying to open it for the real answer. Because of this, it would be wrong to conclude that our cognition works purely on symbol systems; what we should do instead is find the more fundamental level of how our brain works while we are thinking.

    Meanwhile, I would like to mention a concept I learnt in PSYC 433, since it helps me understand this article. AI can be divided into general AI and narrow AI. The former is closer to a human being, who can apply intelligence to tasks from any domain, whereas narrow AI is designed to solve one task, or several tasks within a single domain. To relate this to the article: as Searle's sino-spoiler thought experiment suggests, it would be easy to develop a narrow AI targeted at the original version of the Turing Test, since that is just a symbol system; what is crucial is to focus on the concept of general AI and to discover what cognition is. However, it seems that ChatGPT is also a narrow AI, as it merely generates text based on users' prompts, with a huge database as the resource for generation. I hope I have understood this article, and especially the concepts I mentioned, correctly.

    Replies
    1. Good reflections. But both "general" and "narrow" AI are computation. What is computation? and what CAN it do, before we get to whether that includes cognition. ChatGPT, by the way, is just computation too, plus a HUGE verbal database. We'll sort out "narrow" and "general" when we get to the Turing Test hierarchy: t1, T2, T3, T4...

    2. Thank you so much for your reply! Computation, in short (probably a vague way to introduce it), is information-processing, according to the optional reading of 1a. It follows an algorithm, so it continues to process until the desired outputs are derived from the inputs. To illustrate, ChatGPT should be a type of Turing Machine that has an extremely long tape (the huge verbal database).

      Sometimes I wonder whether the most significant difference between general and narrow AI lies in how they represent information (while the gap between them might also be caused by the materials they use, their physical construction, the size of their memory, etc.), since a well-chosen system of representations could make information storage more efficient, information processing more rapid, and even enable the AI to handle tasks from different domains. However, I am not very familiar with artificial languages, so I cannot explain this more deeply, and I may have misunderstood some concepts. I hope these mistakes will be resolved by future lectures and readings.

    3. It's probably best to think of computation as rule-based symbol manipulation rather than "information-processing" because otherwise I have to ask you what "information" is and what it means to "process" it!

      What does it mean to "manipulate symbols" according to rules (algorithms)? And what are symbols?

      Let's forget about "narrow" and "wide" AI for now...
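
For concreteness, here is a minimal sketch (just an illustration, not something from the reading or the reply above) of what "rule-based symbol manipulation" can mean: the rules below are keyed only to the shapes of the symbols on a tape of characters, never to anything those symbols might mean to a user.

```python
# Toy symbol system: rules match and rewrite symbol SHAPES only.
# Nothing in the rules refers to what the symbols might mean.

RULES = [
    ("1+", "+1"),   # move a stroke across the "+" sign
    ("+", ""),      # when no stroke is left to move, erase the "+"
]

def step(tape: str) -> str:
    """Apply the first rule whose pattern appears on the tape, once."""
    for pattern, replacement in RULES:
        if pattern in tape:
            return tape.replace(pattern, replacement, 1)
    return tape  # no rule applies: halt

def run(tape: str) -> str:
    """Keep applying rules until none applies."""
    while (next_tape := step(tape)) != tape:
        tape = next_tape
    return tape

print(run("111+11"))  # -> "11111"
```

A user can read this as adding 3 and 2 in unary notation, but that reading is entirely in the user's head; the machine only matched and replaced shapes.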

    4. I found that a very interesting point (focusing on what AI can do with respect to questions such as what cognition is, the way humans do it, and whether it is possible to convey this to a computer). Even though arguments like Searle's Chinese Room raise fair points, namely that the Turing Test may not demonstrate true cognition, that doesn't mean that a machine able to translate text without any comprehension of the actual content is not useful. For instance, it may do this task much better or more efficiently than humans, who can then focus their cognitive effort on other tasks or creative processes unhindered by menial work. Thus, on its own, such a Turing machine may not have complete intelligence, yet when combined in an effective manner with human behaviors and inputs, the cooperation between the two may enhance human intelligence and skills.

  3. This reading really made me wonder whether we will ever actually figure out the “how” of our thought processes. Since we humans are the ones looking into the how, and we can’t rely on introspection, would that not also mean that we cannot trust our opinions about anyone’s behavior? What I’m trying to say is: if we can’t rely on figuring out the how on our own, by introspection, how can we rely on our interpretation of any other person’s behavior? Would that not be subjective anyway?

    On another note, towards the end of the reading, the idea of cognition being computation comes forth. That to me would suggest that on a theoretical basis, we can create machines that would be like us humans. But it is hard to imagine that it’s possible, so there surely should be some biological/physiological aspect to the “how”?

    P.S.
    The very last paragraph (before the summary) mentions that it cannot be all computational, so I’m guessing that would somewhat agree with what I said in the second paragraph.

    Replies
    1. I had a similar reaction to this reading. It really brings forth the theory of mind question of how we can only truly know our own states of mind and can only really sympathize with others and theorize how they might feel, but it's not something we can ever really know.
      Like you said, the reading mentions that it cannot all be computational, but this reading did make me consider that if there were some computational program or something similar that could 100% of the time respond as a human might (it would have to be a specific person, I guess, because everyone responds to things differently), and if we were able to explain every mechanism of the program and how it functioned, how can we know that this algorithm is accurate to how we actually function? We can always figure out more about ourselves through experimentation and introspection, but I find myself doubting that we will ever truly know and understand the human mind in full.

    2. Yes, it will turn out that computation is not enough to provide a causal explanation of cognitive capacity. But for now, let's get a clear idea of what computation CAN do.

      In a sense, all creative ideas come from introspection (trying to figure things out). And the creativity is no doubt inspired by observation. But when it comes to seeking a causal explanation of what is going on in our heads to produce our cognition (thinking) and our cognitive capacity, observing what is going on in our minds is far from being enough.

    3. I am posting this again (there was a bug last week, and apparently my reply to Zoe and Selin disappeared). Here is what I was saying:
      I had a similar reaction to the reading as both of you.
      And if I could add something that I was reflecting on while reading the text, it would be this: if we come to the conclusion that the mind is complicated to understand and might never be fully understood, then how can we even develop a strong AI that is supposed to represent the mind, if we are not able to understand it ourselves?
      Because we might try, and deeply want, to reproduce brain networks, neuronal connections and how they affect our thinking. But in the end, if we ourselves are not even able to identify the functioning of our own thought processes and minds, how can we implement it on a computer to “reproduce” ourselves?

  4. The idea that cognition is computation, symbols being categorized by rules, struck me as interesting given the advances in generative AI. I'm curious what these algorithms could tell us about how we categorize stimuli. The reading talks about how in order to name the category of "bird" we must be able to identify and process the stimuli of a heron, a duck, etc., but the rules we apply in order to do this are inaccessible through introspection. I would be curious if the rules applied to these AI programs, which could recognize a photo of a duck as a bird, tell us anything about how it might work in our brains.

    Replies
    1. We'll get to generative AI (ChatGPT) soon enough. But what does it mean to "categorize" stimuli?

      Yes, there are computational models that can learn to identify pictures of birds as birds: Is that categorization?

    2. This is a good question, because I would have thought that to "categorize" something WAS equivalent to identifying a picture of a bird as a bird, because that is identifying what category it belongs to. I don't know more specifically how we form these categories to begin with, but in a child development course we learned that babies develop these basic categories very early on.

    3. Consider that categorizing is something you DO with something. And naming it is just one sort of thing you might do with it. Eating it is another. Petting it is another. Running away from it is another. And you have to learn what kind of thing you need to do which of these things with. Categorization is "doing the right thing with the right kind (i.e., category) of thing."

      And it's dependent on learning the features that distinguish the members from the non-members of the category (or other categories).
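
As a concrete (purely illustrative) sketch of that last point, here is a toy learner that acquires a category by trial and error with corrective feedback: it guesses whether each item is a member, is told right or wrong, and adjusts the weight it gives each feature until the features that actually distinguish members from non-members dominate. The features and examples are hypothetical.

```python
# Toy trial-and-error category learner (perceptron-style sketch).
# Each thing is a list of binary features; feedback says whether the
# guess ("member" or "non-member") was right, and the weights adjust.

# Hypothetical features: [has_feathers, has_fur, can_fly]
examples = [
    ([1, 0, 1], 1),  # duck    -> "bird"
    ([1, 0, 0], 1),  # penguin -> "bird"
    ([0, 1, 0], 0),  # dog     -> not a "bird"
    ([0, 0, 1], 0),  # bat     -> not a "bird"
]

def train(examples, epochs=20):
    weights, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for features, is_member in examples:
            total = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if total > 0 else 0
            error = is_member - guess          # feedback: 0 if right, +/-1 if wrong
            for i, x in enumerate(features):
                weights[i] += error * x        # strengthen/weaken feature weights
            bias += error
    return weights, bias

print(train(examples))  # the feather-detecting weight ends up dominating
```

The learner ends up relying on the feature (feathers) that actually separates the members from the non-members; as the reply above notes, though, such computational models are only part of the story, since real category learning is grounded in sensorimotor interaction, not just feature vectors.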

    4. I have a question in response to your comment, professor -- when we learn categorization, do we begin with individual objects, categories, or a mix of both? For instance, do I start by learning what eating is and then slowly learn about foods, which I then place in the 'for eating' mental category? Or do I learn about individual foods which I do eating with, and eventually form the category of 'foods for eating' as a result? Is there even a significant distinction?

  5. I was particularly intrigued by the discussion surrounding Pylyshyn's critique and “Discharging the Homunculus" paragraph. This caught my attention because I found Pylyshyn's work interesting in my earlier courses. This part highlights the challenges associated with imagery theory and the importance of going beyond surface-level descriptions and superficial explanations of cognitive processes. To improve our understanding of cognition, he emphasizes making implicit calculations explicit and the need for rigorous and testable models. It serves as a reminder that in our effort to understand the workings of the mind, our intuitive experiences might mislead us. When we ask a question such as "Who was your third-grade school teacher?" our minds perform a hidden computational process that cannot be directly observed through introspection. To progress in cognitive theory, these computations need to be made explicit and subjected to evaluation.

    Replies
    1. Pylyshyn was right to suggest that thinking is not explained by imagery or propositions, because both are homuncular (what's that?), and that computation was a better bet. (It was Pylyshyn who recruited Searle's Chinese-Room article for BBS. We'll be discussing it in Week 3.) But was computation enough?

    2. Both imagery and propositions are homuncular because they require an interpreter or user in order to assign meaning. Within cognitive science we are aiming to dispel the homuncular model because it does nothing to help us in understanding how we are actually doing what we are doing within the brain.

      Computation alone is not a strong enough model to describe entirely what is involved in cognition because there are aspects that do not have to do with computation. Computation alone is not able to describe how we do what we do, and thus does not answer the easy problem.

  6. This paper had clarified some questions and issues that I brought up in the comments of section 1a, which had to do with what cognition is. I had asked if cognition is simply just computation (which didn’t feel right) and this paper had given me the answer. Cognition is computation, but also sensorimotor experience, which isn’t inherently computational. It seems as if computation can accomplish almost anything asked of it, yet there is no deep understanding of the task it is asked to do. Although this may be unrelated, I wonder if our definition of intelligence is another limiting factor in figuring out what cognition is, and how we might get to a point where artificial intelligence is regarded as unrecognizable from our own.

    Although the paper makes it clear that computation alone cannot explain cognition, I think Zenon Pylyshyn was right in trying to push computation to its logical limit. Firstly, computation lays a foundation that is simple enough to grasp and has enough theoretical and practical evidence making it valid and reliable to use. It also largely explains how we come to many conclusions. The only real limitation seems to be the ability to understand/link rules and categories to meaning. Although computation leaves us with only a superficial understanding of cognition, I do believe that it lays groundwork to build a deeper model that does link computation to meaning and sensorimotor experience.

    Replies
    1. Good points. Why do you say computation is "superficial"? If it can do what we can do, what more can we ask? But can it do what we can do? And isn't "intelligence" the capacity to do what we can do?

  7. I really enjoyed reading this article as it gave me a clearer picture of the blindspots of cognitive science, and the reason these blindspots exist. The example that we had already talked about in class, about the third-grade teacher, raised a few questions and observations for me. I agree that for me, when
    I came up with the name of my third-grade teacher, I first visualized her in my head, picturing her as though she was in front of me and from there I retrieved the name. In a way, I can see why this is similar to identifying a stimulus actually in front of me, except in this case, I have somehow generated the stimulus (which is the mysterious part). Reading this article reminded me of a conversation I had with a close friend recently. We were trying to convey to each other the process of our thinking (as much as we could with the tool of introspection). My friend claimed she could not do mental visualization; she couldn’t picture faces or objects in her mind, and that when she thinks, it is more of a stream of words and feelings (though in this conversation the term ‘feelings’ remained undefined). I felt that a lot of my thinking was words and feelings accompanied by pictures (definitely pictures/images and not videos), but having not thought extensively about it before, quickly verified that I could in fact visualize faces in my head. I now wonder what process my friend would go through in order to retrieve her third-grade teacher’s name, and how the differences in our individual ways of thinking would impact the rest of our “computations”. One thing I will say about my friend is that she always excelled in algebra, while I preferred forms of math that I could visualize (e.g., simple trigonometry, vectors). It makes complete sense to me that I would have different individual thoughts and feelings from someone else, but that we would have a different thinking style completely (while we both have human brains) is curious. Both styles clearly do the job, but are some ways more optimal than others (at least for certain tasks), as Zenon was wondering?

    Replies
    1. Yes, there are different cognitive styles, but we can all do roughly the same things: learn, recognize and remember things, and talk. Cogsci's first task is to explain how we could do that any-which-way: Does computation explain how? If it does, then explaining the individual differences will be a piece of (vegan) cake. But does it? This week is devoted to the many reasons to expect that it can do the job. Most of those reasons have not even been mentioned yet. I'll describe some of them in class tomorrow.

  8. As I understand it, “Cohabitation: Computation at 70, Cognition at 20” describes the limits of computation as a means to explain cognition on its own. The Searle experiment, in which Searle himself executed a Turing Test-passing program in Chinese (a language he doesn’t speak) to determine whether the computational states were cognitive states, shows that computation cannot be used to mimic the human mind in its entirety. Because Searle was able to execute the program by learning all the symbol manipulation rules, he concluded that understanding the language was not necessary, refuting the idea that the software cognizes. That is what is meant by a mediated symbol grounding system: the symbols used are tied to referents in order to have an output based on an input and the algorithm used. But this is just a matter of interpretation. Rather, in cognition, we are concerned with the process by which the referent is associated with the symbol in the human mind, making computation insufficient for explaining cognition. The whole article highlights how complex it is to grasp what cognition is and how we often confuse cognition with the very processes that we are trying to explain.
    One example that illustrates that confusion was in class, when we were asked to remember the name of our 3rd-grade schoolteacher and explain how we accessed the memory. We don’t have an answer for it, but we have a tendency to just answer “I just remembered it, I visualized myself in 3rd grade and I got their name”. In cognitive science, we want to explain the mechanisms through which memory (in that case) and symbol association (in the case of computation) happen. As a cognitive science major, I find that a lot of the words we use (computation, cognition) are mentioned without being given a definition. I think that both the readings and the lecture incite us to have a deeper reflection on what those words exactly mean.

    Replies
    1. Cogsci does not seek to “mimic” the mind, but to explain it. Turing proposes that computation may be the way (Week 2). How does Searle show that computation cannot understand (Week 3)? (Can ChatGPT understand?). And “association” is a weasel-word: Yes, we can “associate” seeing a dog with the symbol “dog”, but how do we recognize a dog? Stay tuned…

    2. Turing proposed that computation (shape-based symbol manipulation) could be used to reverse-engineer cognition and solve the EP (how and why we can do everything we can do). Computation cannot help us solve the HP (how and why we can feel) because feeling is an interpretation of the computation by the feeler who has added meaning to the output. Searle shows that cognition cannot be all computation because he was the machine performing the computation and could not add feeling to the output because he did not know Chinese. When we use a language that we know, we use computation and feeling to communicate and comprehend others. We recognize a dog with feelings. We know what a dog looks, smells, feels like, what a dog can do, what we can do with a dog, memories of dogs, etc.

  9. This reading got to the core of a problem that has bothered me with two arguments that often come up in casual conversations about cognition, namely 'The brain is just a bio-computer' and 'The mind is something disconnected from the body'. I can now recognize the first as a simplified version of the argument that cognition is computation, and the second obviously ignores the mind-body problem. It is clear that computation can explain a lot of how we get from point A to point B. It can explain the rules, in the sense that if we think of cognition as just the steps we take to get from a question to an answer, then it can tell us how those steps are performed. What it doesn't explain for me is how we take the result to be meaningful, that is, how we connect that result to feeling. That leads me to the question 'what is it that I really want answered?'. I don't just want to know the bio-mechanical process that leads from neurons firing in the brain to nerves in my body and back to my brain. I want to know what that feeling *is*; but what does that even mean, if not that process? The mechanical answer is unsatisfying, but I struggle to understand what else is missing.

    Replies
    1. These reflections are the ones these readings are meant to stir up. Yes, cogsci has to explain how and why humans can do all the (cognitive) things they can do. Being able to explain that (the "Easy Problem") would be nothing to sneeze at. Can computation explain that? And, yes, we want to learn the solution to the "Hard Problem" too: How and why can humans FEEL, rather than just DO. I've got some bad news for you there: Cogsci hasn't got a clue of a clue about that yet. But can computation even solve the Easy Problem? If not, what else is there to do?

  10. This reading touched on a lot of the issues I had with the argument that cognition is purely computation, and the implications this argument has on our ability to understand the human brain and the nature of cognition. Something that struck me was the challenge set forward in the fourth last paragraph of the reading—where “autonomous, non-homuncular functions” can be used to create a more complete explanation of cognition, rather than just equating cognition and computation. From my understanding, the incorporation of these dynamic functions into our understanding of cognition could be used to ground the symbols used in computation to their real world referents. When we consider these dynamic sensorimotor functions, as well as computation, do we better describe our experience of cognition (the feeling of oneself thinking)? What would be the interaction between computational (symbol manipulation) processes and a dynamic sensorimotor process? Is the ability to mediate the “internal symbols and the external things [the]. . . symbols are interpretable as being about” the crux of the experience of cognition (or the experience of feeling oneself thinking), rather than purely computation (9)?

    Replies
    1. Yup, those are the questions. Once we've settled on what computation really is, what it can (and can't) do, and how, we can go on to answer whether it's enough. The "Symbol Grounding Problem" will start to rear its head in Week 3 (Searle), and make a full appearance in Week 5. Meanwhile, there's a cameo appearance by the brain in Week 4 (shooed away by Jerry Fodor: was he right?). A possible solution will make an appearance in Week 6 (category learning and categorical perception) and then, after another cameo appearance by Darwin and evolution in Week 7, we get to the power of language, which is even greater than the power of computation -- once it's grounded (Weeks 8 and 9, with Pinker and Chomsky). But lately ChatGPT has reared its head... What to make of that? We close with the Hard Problem in Week 10, and a reminder of the meaning of life in Week 11.

  11. The “Cohabitation: Computation at Seventy, Cognition at Twenty” reading lays out a description of the historical development of computation and cognitive science. To summarize, we initially see that cognition cannot be explained by introspection (the observation of and reflection on one’s own thoughts), nor by behaviorism (the study of observable behaviours). Chomsky’s “universal grammar”, stating that all children possess an innate linguistic capacity that is not learned but rather “built into their minds” from birth, allowing for the formation of meaningful categories, refutes some of the claims made by behaviorists about the importance of rewards/punishment for learning. But how exactly do we learn these very vocabulary and grammar rules? Mental imagery theory was an attempt to answer this question, which states that we are aware of images in our heads during introspection. The latter leaves us with the “little man in the head” problem, aka the homunculus, leading to infinite regress. To move past this theory, to explain cognition, Zenon Pylyshyn highlights the importance of computations, the manipulation of symbols with rules based on their shapes. But is cognition simply computation? One way to explain cognition is “to put together a system that can do everything a human being can do, indistinguishably from the way a human being does it”, which is what the Turing test (TT) evaluates. John Searle challenged this question of whether there is more to cognition than just computation by proposing to run the TT in a language that he does not understand (Chinese). As computation relies on symbols and shapes and is independent of meaning, Searle’s program indeed passes the TT in Chinese, but this does not guarantee the understanding of a single word of Chinese. Therefore, there must be more to cognition than just computation. To potentially understand what cognition is, the reading suggests that along with computation, we also need sensorimotor capacities as well as the internal processes of the cognizer. During my first semester, in my PSYC 100 course, we learned about “qualia”, the subjective, qualitative parts of conscious experiences. Qualia consist of the unique ways we perceive and experience things, like the redness of an apple or the taste of a (vegan) cake. This aspect of consciousness is private, as it is an inner experience that cannot be directly or explicitly shared with others, since the true essence of that experience is one’s personal qualia. So I was wondering if qualia are somewhat analogous to the sensorimotor capacities and internal processes which, accompanied by computation, potentially explain cognition.

    Replies
    1. Excellent summary of many of the points that will be covered in this course. But, no, "qualia" (a weasel word for feeling) are not the answer: they are another question, the hard one! Stay tuned. It will all gradually come together...

  12. In “Cohabitation: Computation at Seventy, Cognition at Twenty” I found the description of the mind-body problem very interesting. I had never heard of Pylyshyn’s virtual-machine explanation before, nor that he compared the mind-body distinction to that between software and hardware in a machine. While I agree that this distinction does not solve the mind-body problem, I think that it is a helpful way to think of thought as separate from brain function. The relationship between software and hardware sheds light on cognitive states that can be somewhat separate from purely physical or purely mental states.

    Replies
    1. I don't know about "physical" vs "mental": they sound like "dualism" to me. And I'd say that both "mental" and "dualism" are weasel words. Try just "doing capacity" and "feeling capacity" (and trust that both are "physical," but waiting for cogsci to give a causal explanation of them -- one that actually does the job).

  13. I found that the points concerning symbol grounding, or the connection between the symbols in computation and the meaning that they are meant to represent, were particularly striking. Namely, it is fascinating how puzzling it is trying to understand at which point meaning emerges, in which the article states that in cognitive science, unlike in logic, mathematics, and computer science, we "must explain what is going on in the head of the user" rather than simply saying "the link between the symbol and its referent is made by the brain of the user." This is followed up with a discussion on how computation itself is not sufficient as an explanation for cognition, as Searle illustrated by the Chinese room experiment. This reveals the limitations of the most basic iteration of the Turing Test, as a system such as the Chinese interpreter can appear entirely competent without there being any genuine understanding of the meaning of the symbols. I connected this with my understanding of generative AI, as these programs create their responses based on probabilities of words, rather than actually comprehending words and ideas.

    Also, like some of the other students who responded earlier, this text made me question how, and whether, the problem of cognition can be solved. If introspection results in an infinite homuncular loop, and computation is limited by not being able to explain how symbols are grounded in the brain, what methods does this leave us with? The article speaks of dynamic functions such as analog rotation, but how are these carried out? I'm interested in learning about how "scaling up to the Turing Test, for all of our behavioral capacities" should address some of these questions, as stated at the end of the article.

    Replies
    1. Good summary, and good questions. We'll get to all that...

    2. One of the points you raised is exactly the thought I had reading this paper. If computation is not enough, what exactly are we left with to explain how the mind grounds symbols in meaning? Searle’s Chinese Room argument is quite convincing, in that his point is simple yet strikes a core question: If computation and rote memorization are insufficient to say that a Turing machine is cognizing, then what exactly is this process? Though I find this argument convincing, due to the scope of what computation is supposedly capable of, I am left wondering whether symbol grounding is not possible through some sort of computation. The sensorimotor component of cognition that the paper talks about is an interesting proposition, but I am not entirely convinced that this is not in and of itself computation. If one is receiving sensorimotor stimuli, and these stimuli transduce a response in the brain, then does this not also beg the question of how exactly the stimuli produce the specific response?

    3. If cognition's not (just) computation, what do we have left? Every physical and chemical and dynamic process in the world (including transduction, which is not computation either). (What, by the way, is transduction?)

      Searle shows only that cognition cannot be ONLY computation.

      A computational model of a vacuum cleaner has symbols and states that are interpretable (by users) as having every relevant feature of a vacuum cleaner, and even of explaining how a vacuum cleaner works. But it cannot suck up dust. Transmit output of the program to the receptors of a VR device (including goggles and gloves) in another room and it can fool your senses into thinking you are vacuuming the dust in a room. Take off the gloves and goggles and all there is left in the room is the VR hardware (which is not a computer computing, either! Why not?)

    4. I found your point about dynamic and nondynamic functions very interesting – as well as how each might contribute to an emerging ‘mind’. Since we don’t know the answer to the ‘how’ question - how the brain does what it does - we aren’t able to confidently know which approach is relevant to answering it. This reading mentioned that Skinner felt behavioural theories of learning are irrelevant, a word that stuck out to me in the ways that we approach the problem of cognition. How do we determine relevance? Does it not already require an understanding of the how to do so?

    5. Your comment about the vacuum cleaner brings about the same point as the ice cube example that you used in class: an algorithm can completely encode an ice cube and its environment and how it melts so much so that if you are put into VR your senses are tricked into literally thinking you are perceiving an ice cube. I wanted to clarify your question, though: "Take off the gloves and goggles and all there is left in the room is the VR hardware—which is not a computer computing, either, why not?". Why is it that the VR hardware is not a computer computing?

  14. This paper articulated to me the challenges we face in understanding ‘cognition’ (whatever is happening in the brain between inputs and outputs)--one major challenge being that cognition is “impenetrable to introspection” (p. 248). What resonated most with me was the idea that we need to move away from just examining inputs and outputs of the brain. This made me think about artificial neural networks, which are often great at predicting what a neural network will output given a certain input, because they learn from rigorous feedback loops and reinforcement learning. While they may be useful models for examining and testing specific networks/systems in the brain, they do not replicate the actual mechanisms responsible for these outputs, and cannot explain why these networks work the way that they do. I used to wonder why we didn’t just simulate every neural network in the brain with an artificial neural network to try to replicate a human brain, as this seemed to me like a great solution for many of our problems in psychology, neuroscience, and other fields. I now see why this was a very naive thought, as this would still not answer the fundamental question of ‘how’ these networks function together to give rise to thoughts, images, language, and more.

    Replies
    1. It's not quite as bleak as all that. Whatever causal mechanism turns out to be able to solve the "Easy Problem" (what's that?) will really be a (candidate) solution to the Easy Problem. (There might even turn out to be more than one!) The "Hard Problem" (what's that?) may be a harder nut to crack, but the solution to the Easy problem might turn out to generate feeling too -- only there's no way we can know for sure. At least that's Turing's point, and his method. We'll get to that next week.

  15. Reading 1b: Computation at 70, Cognition at 20

    Some of the points in the article, “ Computation at 70, Cognition at 20” puzzled me so much that they broke down what I thought I knew about the way my own mind works. One point that I found to be particularly ambiguous was our inability to explain our cognitive capacities. The discussion surrounding cognition falls on how we are able to think the way we do (i.e. recalling one’s 3rd-grade teacher or knowing that a blue jay is a bird), but we cannot rely on the answers we come up with. Therefore, in the exploration of an answer we interrogate our minds on their own functional capacity and have them try to uncover the mysteries of themselves which leads to a tricky paradox. To me, this paradox lies in the existence of anosognosia. Namely, I don’t understand why the homunculus that retrieves stored information is not a valid answer to questions referring to memory recall (especially since retrieval is a valid concept taught in psychology). Why must we even explain what the homunculus is doing? Are experiments to uncover its function even possible if we don’t know what we’re searching for? In essence, this reading confused me and left me with many questions surrounding the difficulties in the search for the extent of our cognitive capacities.

    Finally, this article reminded me of a quote by John Locke I learned in PHIL 360 which reads, “For the understanding, like the eye, judging of objects only by its own sight, cannot but be pleased with what it discovers, having less regret for what has escaped it, because it is unknown.” Essentially it simply means you are content with what you know because you don’t know what you don’t know (sometimes).

    Replies
    1. I like anosognosia too (what is that?). It's my favorite neurological illness, and we all have it, at least for explaining how and why we can do what we can do and feel what we can feel. Our grandparents all chuckle at what we are trying to learn in this course, because it's all so obvious how we do and feel. But let's try to overcome our grandparents' anosognosia and try to find a causal explanation that works anyway. They used to think we understood the stars too, until astrophysics began to study and explain it.

  16. Hi everyone!

    In the paper "Cohabitation: Computation at Seventy, Cognition at Twenty", it is argued that the Turing Test is insufficient to prove that a robot can cognize. I agree with this assessment. The Turing Test only measures a robot's ability to mimic human conversation, but it does not prove that the robot is actually sentient. Harnad proposes a reverse-engineering approach to creating a symbol-grounded, cognizing, non-human being. However, I believe that this approach would not create a truly humanly cognitive being. A robot that passes the Turing Test while grounding symbols in sensorimotor capacities may simply be a philosophical zombie, as discussed in class.
    It is impossible to experimentally test whether a robot is sentient. Therefore, if we consider consciousness and feeling to be essential elements of cognition, then it is impossible to test whether a robot can cognize as a human does. However, I am also aware that the Turing Test is probably still the best way we have to measure a robot's cognitive abilities. But as technology advances, it might be possible that we will develop better methods for testing a robot's sentience in the future.

    Replies
    1. Turing's method is to try to solve the Easy Problem and give up on the Hard Problem. Do you have any other ideas for solving the third problem, the "Other Minds" Problem (which, by the way, is not the same as the Hard Problem)? What is the difference and what is the way?

  17. As a psychology major with limited knowledge of computational models of cognition, I found the article’s emphasis on the importance of interdisciplinarity in approaching the study of cognition extremely interesting. In applying these other disciplines such as neuroscience and psychology, it may be possible to fill in the gaps left by the simple, black-and-white equation of cognition with computation, since we know that the answer is not so simple. The effect of sensorimotor functions and dynamical processes already explored in other domains can add insight that computation as an explanation in itself cannot fully cover. One question I had was, other than scientific curiosity, what purpose can be served by answering the questions raised when addressing the mental imagery theory? Especially with the question, “how do I come up with [the third grade teacher’s] picture?” What would be the benefit of knowing how we come up with images in our mind, and what could it be applied to outside of those specific instances?

  18. It is striking to me how little we have progressed from the time when the article was written. What puzzled cognitive scientists back then (and all the way back to when Turing first proposed the issues) are still puzzling today. My example of explaining that cognition is a hybrid system is that my head hurts after reading the article and pondering over what to write. The physiological response during cognitive function suggests the mind and body are connected (at least, as in a human mind and body).
    But when we talk about “mind”, it is also a symbol with meaning granted by humans, thus the question is: are we really investigating what we think we are? A computer can communicate with us in English, but it may not understand the meaning of the word symbols. If asked to investigate its own thinking, the answers it gives out indeed are about its mind. Which leads to my confusion. How do we know that we are not in the computer's position above? Coming down to each individual, “cognition” and “mind” might mean slightly different things for each and everyone of us.

    Replies
    1. Don't ponder TOO hard. Remember Descartes: You can be sure that you think, because it feels like something to think, and it's impossible to doubt that you are feeling, when you are feeling. The rest is about explaining, causally, how and why we can think, and feel. That's cogsci, which tries to reverse-engineer it. (Maybe throw out "mind," because it's just a weasel word for the capacity to do, think and feel.)

    2. I think a lot of our capacities come from the sensory information we receive as input. Would it be easier to reverse-engineer the thinking system in a human with limited sensory information, for example someone who is unable to see or hear? (I apologize if I appear unethical or offensive, which I am not trying to be)

    3. What do you mean by "sensory information"? (What is "information"?)

      Helen Keller was unable to hear or to see, but she nevertheless had all of our cognitive capacities. She would certainly pass the TT, both T2 and T3. She just couldn't see or hear.

      But it would be much harder, not easier, to design a T3 model that could do what Helen Keller could do, without being able to see or hear. What would be "easier" would be to try to model normal, un-handicapped capacity, and, if and when your candidate succeeds in passing T3, to try to model (but not build!) a version that could not see or hear, but could nevertheless learn to do all the rest.

      (This is true also of the incomparably easier task of modelling heart function. It is easier to first model a normally functioning heart, rather than one that is functioning despite congestive heart failure.)

      [This is true despite Claude Bernard's dictum that the way to learn how an organ works is to test what happens if you damage it. I won't comment on this; it's about physiology and biochemistry, not about computer modelling, so no cats or mice are being hurt so you can twiddle and tweak with their bodies and brains. Gigabytes are cheap; sentient beings are bought: but they are the ones who pay.]

  19. I now understand why I was having such a hard time thinking of cognition as computation in 1a.
    The reading “Cohabitation: Computation at 70, Cognition at 20” explained the challenges.
    One reason is that there are still aspects of cognition we do not understand (how we are able to learn, how we are able to imagine pictures, how we are able to name and categorize things). It is difficult to determine if a process is computation if we do not fully understand the process. We tend to explain how we do things in a propositional way and not a computational way, not delving to the deepest levels of how processes work. Additionally, it is likely we do some computation subconsciously, so it is impossible to determine introspectively. Overall, it seems there are still many uncertainties about how cognition works in the mind, making it difficult to determine whether it is computation.

    Another challenge explained in this reading is that on a physical level, there is debate as to whether biomolecular details of brain function are cognitive science or some other subject (is brain hardware cognition or exclusively mind software). This complicates defining the components of computation (such as states).

    Another challenge regarding how to define cognition in computational terms is that it is difficult to identify a symbol. A symbol need not look like what it represents, and it is not clear what symbols make up cognition.

    There is also a question of ‘to what degree cognition is computation’. Cognition requires computation (symbolic representations) but also dynamics (the processes themselves). The article author proposes the creation of a robot that passes an advanced Turing Test, with human-like behaviours and sensorimotor capacities. Based on how the machine functions to achieve this, we should be able to determine how much of cognition is computation. However, this machine may still not truly understand anything, simply perform, so perhaps it will be an imperfect tool.
    It is evident we must solve these debates and questions before knowing for sure how much of cognition is computation.

    Replies
    1. All these points will be taken up in the next few weeks. Just a few details: If we could explain our cognitive capacity by introspecting in an armchair, we would not need to do or learn cogsci. So it's not surprising that we are "unconscious" of how we do it.

      About the brain, we'll talk (a little) in Week 4.

      About defining a symbol: ask Turing. They're the (physical) things on the tape of the Turing Machine, and its internal states. The word "apple" is a symbol, and so are the shapes "2" and "=".

      But "representation" is still a weasel word...
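
For concreteness, here is a bare-bones sketch (an illustration only, not something from the reading) of Turing's picture: the symbols are just the marks on a tape, the machine has a few internal states, and its whole behaviour is fixed by a finite table keyed on the current state and the shape under the read/write head.

```python
# Minimal Turing-machine sketch: tape symbols, internal states, and a
# transition table keyed only on (state, symbol shape under the head).

TABLE = {
    # (state, read): (write, head move, next state)
    ("scan", "1"): ("1", +1, "scan"),   # skip over strokes
    ("scan", "_"): ("1", 0, "halt"),    # first blank: write a stroke, halt
}

def run(tape, state="scan", head=0):
    tape = list(tape)
    while state != "halt":
        if head >= len(tape):
            tape.append("_")            # the tape can always be extended with blanks
        write, move, state = TABLE[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape)

print(run("111_"))  # -> "1111"
```

The table never says what "1" means; reading the output as "the successor of three" is the user's interpretation, which is exactly where the symbol-grounding problem starts.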

  20. This reading really helped make the link between computation as I usually think of it – in terms of math and computers – and cognition much clearer. My previous response in 1a was very macro, and didn’t really delve into computation itself, mainly because I was confused by its application in psychology. The section of this reading about replacing the homunculus with computation made things clearer for me, as it's pretty obvious that we can’t explain neural processes through imagery or words alone and computation is a more apt fit. While reading I was questioning how our brains know what computation to perform, and if this computational approach could also be accused of having “the little man” select the computation. The concerns about the symbol-grounding problem at the end of the paper addressed this, but I have a few questions about how the Turing test can be adapted. I agree the email test is insufficient, but I’m having a hard time understanding what the “full robotic” version would be: does it just mean it doesn’t require the “back and forth” of an email chain? Also, how do we know that as humans we can perfectly identify another human? As we talked about in class, Lemoine was convinced that LaMDA had passed the test, yet the AI was actually not that sophisticated. Is there a stipulation for the test that all humans would need to mistake it for another human? I guess I’m curious if the Turing test is really the ultimate benchmark for an AI if we saw that Lemoine believed LaMDA passed but others did not, and how we as humans are meant to evaluate it.

    Replies
    1. As an aside, I was also wondering if someone could explain what it means on page 252 that manipulation rules are based on symbols’ shapes rather than their meanings? Thanks!

    2. Computation would be homuncular if we had to know which computation to perform, like when we’re doing long-division. But fortunately, they are done for us by our brains without our having the slightest clue how they’re doing it (as identifying our 3rd grade teacher shows). But that’s why cogsci has a lot of work to do, figuring out how our brains do it.

      The difference between the verbal TT (T2) and the robotic TT (T3) is that T2 can be just word manipulation, whereas T3 requires the ability to do in the world all the things we can do in it: seek, recognize and eat apples, recognize friends, learn sensorimotor categories (like apples and friends) and so on.

      And both T2 and T3 are supposed to be lifelong capacities, not just short tricks.

      Symbol-grounding is about learning to recognize and categorize the things (“referents”) that our words refer to in the world.

      Yes, the crux of the TT is that we cannot distinguish the candidate (whether chatbot or robot) from a real person’s verbal (T2) or sensorimotor (T3) performance capacity, lifelong. But it IS mostly a test of what you can DO rather than what you look like. So it’s not racist or sexist or speciesist…

      Words have referents (“apple” –> apples) and propositions (“the cat is on the mat”) have meanings (True if the cat is on the mat, False if the mat is on the cat). More about that when we get to language.

      Poor Blake Lemoine got carried away when LaMDA told him the thing he was most afraid of was that someone would pull his plug. (Why was Blake so gullible? Why would LaMDA have said that?)

      Delete
  21. A part of the reading that stood out to me was the part about cognitive blind spots. Specifically, how important is it to us to understand what happens in these cognitive blind spots? As someone who approaches things in a more biological/psychological way, I’m not quite sure there is a practical use to finding that out. I personally think that the way Chomsky explains universal grammar, that it is already “built in” to our brains is an example of a non-explanation that arises because of cognitive blind spots — especially since there is evidence of children over-generalizing grammar rules when they are younger. However, it is still a widely accepted theory in many fields today, even though it has cognitive blind spots and doesn’t really explain anything to me. So, again, if we accept and apply theories with these blind spots, is it really important to figure out what these blind spots are?

    ReplyDelete
    Replies
    1. Taking a cogsci course without wanting to know how the brain produces cognition is a bit like taking an auto mechanics course without wanting to know how an engine produces movement. Except that auto mechanics is forward-engineering -- cars were designed and built by engineers to produce movement -- whereas cogsci is reverse-engineering: brains were designed and built by Darwinian evolution (Week 7) and cogsci has to figure out how and why they can do what they can do. In Week 4, we’ll discuss why, paradoxically, the brain is not the kind of organ whose function you can learn by just observing and manipulating it, the way you can learn how the heart or the kidneys work. The brain does everything we can do. Cogsci has no choice but to try to model it computationally.

      But that does not mean the brain is just doing computation.

      About Chomsky, I suggest you reserve judgment till we get to language (Weeks 8 & 9). Chomsky can’t be waved away quite as lightly as your prior courses may have led you to imagine…

      Delete
  22. I found the reading pretty fluid until I got to the paragraph entitled “Discharging the homunculus.” As I see it, the homunculus is just a metaphor that refers to the internal process responsible for the images that come to us. This metaphor of the little man seems to me to conceal the cognitive blind spots about the functional mechanisms: because there was no explanation of what was going on in the head, people simply used the image of a little man doing the work for us. So is the homunculus more than just a metaphor?

    While I have a basic understanding of what computation is (a rule-based symbol manipulation), I can't quite grasp the concept of dynamical functions. It seems clear to me that computation is insufficient due to the symbol-grounding problem and the influence of feelings and instincts on thought and decisions. So, how can we define or specify what these dynamic functions are? This is likely a question that cognitive science is still working on, but I hope the rest of the course will provide us with a clearer idea about it.

    ReplyDelete
    Replies
    1. In a similar way to you, I am not entirely sure of the usefulness or extent of the homunculus metaphor (if that is even what it is). The way I understand it, the homunculus in the past has been used purely to attempt to answer the question about what goes on in your head during a specific process on a superficial level. I think the idea that we need to “discharge the homunculus”, as Zenon says, relates to the fact that we need to look beyond the idea that there is a somewhat separate entity that is coming up with all of the answers for us. This would in theory lead us to more concrete conclusions about functional questions. In general, the entire metaphor is a little difficult for me to understand because I can’t visualize it. So, it’s unclear to me whether the homunculus is used only as a theoretical construct created when people attempt to understand how they produce an answer, or if it is something else entirely.

      Delete
    2. The homunculus is YOU. And what cogsci is trying to explain is what is going on inside you, without inventing another homunculus in there.

      Delete
  23. Throughout reading this piece, the main idea in my mind is how the majority of people, even myself as a person studying psychology, do not have much or any idea at all of how their cognitive processes work. I also enjoyed the connection between the third-grade teacher question in the reading and when the Professor asked us that in class; however, the subtraction example made me think even more, because subtracting 2 from 7 seems so simple and even pointless to think about, yet when diving deeper into how our brain knows and understands, it gets interesting and complicated at the same time. It raises the question of what we can do about this, but also whether we need to do anything about the lack of awareness of one's cognitive processes: is it necessary for the average person to attempt to understand them? Personally, I do not think the average person needs to explore the depths of how their cognitive processes work, but it is valuable for some people to investigate and know how they do. I am interested to hear other people’s opinions on whether anything needs to be done at all about the average person not knowing much about their own cognitive processes.

    Additionally, the idea of rejecting the Homunculus theory stuck out to me, and makes sense as the idea of little people in our heads creating our thoughts and behaviors seems like an infinite cycle. It reminded me of when I watched the Inside Out movie for the first time and thought about the emotional people and if they have a little person who controls other emotions in their head as well, and the cycle is infinite. The specific sentence, “Stop answering the functional questions in terms of their decorative correlates, but explain the function themselves” stuck out to me when explaining why to discharge this theory because I think it highlights the idea of needing to dive deeper and ask questions that have never been asked or attempted to be answered before. While the reading dives into many theories and ideas that have been incorrect and false, these ideas are essential to rule out to get closer to the actual answer to how cognition and computation work together and actually work individually as cognitive processes.

    ReplyDelete
  24. This week's second reading builds upon the previous ones. Initially, we attempted to define Turing machines and computation. In this second reading, Zenon Pylyshyn adds another layer of depth by asserting that cognition encompasses more than pure computation. While the Turing test may offer insights into certain aspects of cognition, it does have notable limitations. For instance, it relies on imitation and cannot assess consciousness, just to name a couple. However, this prompts us to consider and explore alternative approaches. One such approach would involve integrating computational elements with dynamic, sensorimotor components. In essence, how we interact with our environment also plays a crucial role in this model. Additionally, I now realize that there was more to how I remembered my third-grade teacher's name than just computations. The way I linked the image of my teacher to his name through mediated symbol grounding is only part of the explanation. It's "mediated" because it's influenced by my personal experiences, which enable me to connect the word to the image. Computations alone could potentially link the names of every person on earth to their images given the right information, but mediation would still be necessary to initially identify those individuals. It's this "dynamic" and "mediation" aspect that we're striving to understand and replicate.

    ReplyDelete
    Replies
    1. (1) Reverse-engineering and Turing-Testing is not imitation. (Why?)

      (2) The other-minds problem makes it impossible to observe feeling, only behaviour and physiology.

      (3) Yes, cognition is very likely to turn out to be hybrid: sensorimotor/computational.

      (4) The capacity to interact with the environment (human and mushroom) is what the TT is all about.

      I'm not sure what you mean by "mediation." And remember that it is generic cognitive capacity that cogsci is trying to reverse-engineer and explain, not one particular person.

      Delete
    2. Reverse engineering is not a form of imitation because we are not merely attempting to produce a replica of a system that we have, rather we are trying to figure out how these end processes are able to occur. By creating models that we can test, we are working towards producing the end results that would then lead us to an understanding of how our own minds work. Similarly, the Turing Test is used to attempt to solve the easy problem not through a mimicry of the human mind but by producing a machine whose responses are indistinguishable from humans for a lifetime. Thus it is not trying to imitate cognition but rather create it in a separate form.

      Delete
  25. I would like to offer an objection to Searle’s argument that passing the Turing test does not necessarily mean a cognitive process is behind it. Searle’s argument turns on the lack of a concrete definition of the term ‘understanding’. Searle seems to be trusting the commonsensical definition of ‘understanding’: ‘grasping the meaning of symbols’. Searle claims that, since he has computational understanding of Chinese (i.e., the ability to write coherent emails in Chinese) but no knowledge of the meaning behind the Chinese symbols, he does not have understanding and thus this is not a cognitive process.
    But consider the following scenario: a middle school child is just starting to study mathematics. They are taught basic mathematical operations (e.g., addition, subtraction, division…), but are blind to the actual meaning and reasoning behind these operations (e.g., axioms, proofs…). They can produce correct results using these rules. Now, we could say that the child ‘understands’ mathematics. Yet they possess only computational understanding, with no grasp of the meaning behind the symbols, just as Searle does in his thought experiment.
    This parallel scenario exemplifies how Searle’s weak definition of ‘understanding’ is not sufficient: having a computational but not a meaningful grasp of an algorithmic task does not consistently mean a lack of understanding. Given this weak definition of “understanding”, it is ambiguous whether Searle’s process is or is not cognitive, thus undermining Searle’s counterargument to the effectiveness of a Turing test.

    ReplyDelete
    Replies
    1. Yes, Searle's argument is that he is exactly like the beginning algebra student who is just executing the recipe for finding the root of a quadratic equation without knowing what a root or a quadratic equation means. Searle really doesn't understand Chinese! He's just executing an algorithm. We don't even need a definition of understanding to see that. (But later in the course you will get a definition (or at least a hypothesis): T3-grounding + what it feels like to understand (the "hard problem").)
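
      For what "just executing the recipe" looks like, here is a minimal sketch (hypothetical code, my own illustration, not from the reading): the quadratic formula carried out step by step. Nothing in the procedure knows what a "root" or a "parabola" is; any interpretation is supplied by us.

      import math

      def quadratic_roots(a, b, c):
          # Mechanically execute x = (-b +/- sqrt(b^2 - 4ac)) / 2a.
          # Each step is just symbol manipulation; the recipe has no notion of "root".
          discriminant = b * b - 4 * a * c      # assumes real roots for simplicity
          root_of_disc = math.sqrt(discriminant)
          return ((-b + root_of_disc) / (2 * a),
                  (-b - root_of_disc) / (2 * a))

      print(quadratic_roots(1, -5, 6))   # (3.0, 2.0) -- the interpretation is in our heads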

      Delete
  26. Harnad asks a very relevant question in Cohabitation: Computation at Seventy, Cognition at Twenty. On p. 252 he inquires, ‘But if symbols have meanings, yet their meanings are not in the symbol system itself, what is the connection between the symbols and what they mean?’ This is rhetorical, and more a question to emphasize the complexity of computation.
    I was instantly reminded of my class COMP 230 in which we approached computation and formal systems. There, the link between symbols and their meaning is referred to as “interpretation” or: structure together with a mapping from the non-logical symbols in the language to the elements of the structure. Usually, it is a function.
    Can computations be equivalent to functions? Can brain processes be as well?

    ReplyDelete
    Replies
    1. No matter what they say in COMP 230, the "interpretation" of the symbols in a computation (e.g., what it means to find the root of a quadratic equation) is not in the computation (algorithm, recipe) but in your head. So you have to turn to cogsci to find out what is going on in your head. Right now, in the first week of this course, we are still trying to get clear what computation is (not what its interpretation is). Next we'll look at cogsci's hypothesis ("Strong AI" or "computationalism") that what's going on in your head is... computation too!
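
      A minimal sketch of that point (my own hypothetical example, not from COMP 230 or the reading): the formal rule below only concatenates uninterpreted stroke tokens; the "interpretation" is a separate mapping that we supply from outside the symbol system.

      # Hypothetical toy example: the symbol system just shuffles shapes.
      def combine(x, y):
          # Formal rule: concatenate two strings of '|' tokens (pure shape manipulation).
          return x + y

      result = combine("|||", "||")     # '|||||' -- so far, no meaning anywhere

      # The interpretation is a mapping from symbols to a structure, supplied by us:
      interpret = len                   # map a string of strokes to a natural number
      print(interpret(result))          # 5 -- under THIS mapping, 'combine' means addition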

      Stay tuned...

      Delete
  27. When looking into cognition, behaviorism is question-begging, and introspection is not effective because we are blind to our own blind spots. As Zenon suggests, a “successful cognitive theory must make [the computation that occurs in our heads] explicit so it can be tested.” (Imagery, therefore, is not a sufficient explanation.) In cognitive science, we must explain the things that are going on inside the head of the user - for example, what is happening in your head when you connect the written word “cat” to the referential object.
    Can this process, then, just be explained as a “computation”? According to Turing, “if there is a system that can do everything a human can do indistinguishably from a human,” we can explain cognition using/as a computation. It is quickly made apparent, however, that using the Turing Test (TT) is inappropriate in this situation. As Searle shows through his Chinese Turing Test thought experiment, symbolic manipulation can be performed without understanding. Hence, the TT-passing program is not cognitive but merely “a series of symbols systematically interpretable by us as users with minds.” The question we are left with, then, is to come up with an explanation of how symbols in a symbol system can be connected to the things in the world (without answering that the connection is mediated by the human mind, because this is the object whose functions and capability we are trying to explain!)

    ReplyDelete
    Replies
    1. Yes, homuncular explanation is not reverse-engineering and it cannot explain how to generate the capacity.

      Delete
  28. This reading covers what the study of cognitive science is, as well as a few approaches that were used in an attempt to understand cognition. The goal is to understand the ‘intervening internal process’ between the computational steps that we are able to write out, or in other words, to ‘make the implicit computations in our heads explicit’. Methods such as introspection, behaviourism, and the Turing machine model failed to reach this goal because of all sorts of cognitive blind spots, or because they can only explain it partially, leading to more questions and problems, e.g. the symbol-grounding problem. This made me wonder what other methods can be used to bring us a step closer to understanding cognition?

    ReplyDelete
    Replies
    1. Cognition is not just computation. But computational modelling can be used to test what else is needed, e.g., sensorimotor capacity and analog processing.

      Delete
  29. The “Computation at 70, Cognition at 20” reading got me reflecting on the scope of what computation can do. How can the symbol system used in computation be adapted to create the same linkage between the symbol and its referent as the human mind does?

    In the Discharging the Homunculus section, it was stated that the computation inside a human’s brain, like when it tries to recall something, is invisible and impenetrable to introspection. When I try to dig into my memories, I’m not aware of what exactly is happening inside my brain or the steps involved in remembering; it just happens without my conscious control. Successful cognitive theory should aim to make these implicit computations explicit to test them on computers and see if they work. Relying on subjective experiences can be misleading and doesn’t accurately reveal how our minds really work. In essence, the call to make implicit computations explicit offers a promising avenue for advancing our understanding of cognition and the role of computation within it, but this means we would need to explore the implicit mechanism of our unconscious minds.

    ReplyDelete
    Replies
    1. Yes, and that's Turing's method. To model it computationally, and then build and test it, both verbally (T2) and robotically (T3).

      Delete
  30. Notes 1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20
    The symbol-grounding problem stood out to me the most in this reading. Pushing back against Pylyshyn’s ‘all cognition is computation’ by saying there ‘must be some dynamic processes occurring’ is more convincing as an explanation of cognition.
    My boyfriend has congenital anosmia (no perception of smell since birth) and my time with him has demonstrated the importance of sensory perception on feeling/meaning of smell symbols. He has learned to react the same way to smells as others and talk about smell as if he had experienced it, acquired through observational learning from birth (computational part). If someone brings up smelling or smells, he can even fool people into thinking he can smell if he doesn’t feel like explaining his anosmia (behavioural equivalence). When he asks me about how it really feels to smell, I can give him all the details possible but eventually it just ends up coming to “you have to experience it at least once to be able to truly feel smells”. The same place I get stuck in my explanation of smelling to him reminds me a lot of where we’re getting stuck at explaining cognition, you just have to do it to know the cognitive process of smelling.

    ReplyDelete
    Replies
    1. Hi Kaitlin, thank you for sharing this story. It's truly intriguing and has genuinely amazed me that something like this could occur. (I really hope the best for you and your boyfriend!) It has made me contemplate whether comprehending all the intricate details of how a person experiences a smell can equate to actually experiencing one. What is lacking in this understanding that prevents it from completing the overall picture? My best guess would be that one missing element is the aspect of 'feeling' or, more precisely, emotions. Everything seems explainable in terms of behavioral concepts, except for the emotions or feelings experienced in that moment.

      Delete
    2. Helen Keller could not see or hear at all; that's a much more severe sensory deficit than anosmia. Yet she managed to learn everything called for by the Turing Test: How?

      But the answer is not feeling, although she did have feeling through all her intact sensory capacities, because feeling is unobservable (except to the feeler). So the TT cannot test it.

      Delete
  31. This might seem a bit of an easy question, but when talking about mental imagery and/or picturing anything in our head, what would that mean for people who are visually impaired or blind? I know that we have dismissed this theory quite easily – even though we cannot argue against the fact that mental imagery does play an important part in how we construct our thoughts. However, would it not be interesting to study cognition with and without vision and compare? Maybe it would even broaden our view of cognition – cognition is not a singular concept but maybe differs depending on each individual.

    ReplyDelete
    Replies
    1. See the other replies about Helen Keller and sensory deficits. And read a bit about sensory plasticity.

      Delete
  32. This reading brought to mind much more than I could put in a small paragraph, so I'll choose only one of my comments/questions.

    In anthropology there's often a distinction between Iconic, Indexical, and Symbolic signs (signs can be treated qua symbol, as we mean symbol in this seminar), being respectively that which directly represents something (as a pictogram), that which causally points to a related notion (some hieroglyphics, for example, that refer to a sound related to the drawing), and lastly Symbolic signs that are "unmoored" and derive their meaning only from the other members of their symbol system. When dealing with neural computation of physical symbol systems, is there any such distinction? Amongst the physical symbol system of the mind, iconic could be direct re-creation of an external stimulus: at one point the light from your teacher's face must've reached your retina. Of course things get more complicated as we climb onto Indexical, and the leap in abstraction from Indexical to Symbolic even more so, yet when dealing with computation qua cognition, I cannot help but wonder if all symbols are made alike.

    ReplyDelete
    Replies
    1. I suggest you set aside the Saussure/Peirce terminology and concepts for this course. We are talking about computation now, and the notion of "symbol" in computation is not particularly Peircian. Even when we get to language, a word (also a symbol) will be like the Turing notion, except that it will have a referent ("apple" --> apple). And then we'll get to propositions ("the cat is on the mat"), which are subject/predicate strings of symbols, with truth values T and F, again not much to do with Peircian semiotics.

      Iconicity will only become relevant when it comes to imitation and mirror-neurons, and eventually miming and the origins of language.

      Delete
  33. I got really hung up on Pylyshyn’s theory that all cognition is computation. I understand that this theory is refuted by the symbol grounding problem, which shows that computation is insufficient to explain the way that human minds, if they are only symbol systems, link the symbols they manipulate to semantic meanings. However, Pylyshyn’s definition of cognition seems so narrow that I feel like the symbol grounding problem barely applies. Pylyshyn says that all cognition is computation, and cognition is defined as processes that can be modified by explicit statements. Anything that does not meet this property is subcognitive. If this is true, then what /does/ count as cognition? His definition excludes things like memory retrieval, perceptual states, and emotional processing, none of which we perform using explicit rules and which can’t be modified by changing our beliefs. Does Pylyshyn’s idea of cognition, then, include so few functions of the mind? The symbol grounding problem does work as an answer to his purely computational conception of things like mental math and language processing, but I think it’s funny that no refutation of a computational theory of memory and perception is necessary because they’re excluded from his definition of cognition.

    ReplyDelete
    Replies
    1. Very good points. In fact, Pylyshyn's notion of the "subcognitive" is again just as homuncular as the other options he rejects as homuncular (mental imagery, mental propositions) because he slips in a homunculus there too! -- The "user" of the "functional architecture" (which is like a digital computer and its user-interface).

      Anything below that user "level" is dubbed "subcognitive." But it's all homuncular. To really banish the homunculus you have to set aside the hard problem, and what is and isn't mental. What goes on in the head is either producing cognitive capacities or vegetative capacities, or some combination of both. There is no mental/homuncular "level," and "processes" that are "above" or "below" it. There's just the mechanism that generates our cognitive and vegetative capacities, and cogsci is trying to reverse-engineer how.

      Bracket the hard-problem and the fact that thinking feels like something. Leave it till Week 10 (but don't expect much...)

      Delete
  34. I agree with Searle about the insufficiency of the Turing test; the entire test feels like a representation of a weasel word…. What are the criteria we are using for a Turing test candidate to be considered cognizant (and/or human)? What aspects of cognition are we searching for the TT to accomplish? Is it language – because if so, then the TT is already successful. Is it evoking emotion in the other? Is it just reading, understanding, and responding to the emails sent by the human? HOW are we looking to be convinced?
    Further, the concept of “implementation-independence” does not seem like an accurate comparison to human cognition given that it doesn’t account for development. Are we implying that babies do not cognize? If so, would that make them less human? If not, differing levels of plasticity, semantic knowledge, experience would not allow us to “run” an adult computational system on a toddler, thus, invalidating the TT.
    If we consider the symbol-grounding problem, and how Harnad proposes we deal with this, are we on the brink of experiencing a real, sensory-motor, higher order cognitive processing AI machine that cognizes? And if so, will this actually answer any of our questions about HOW we and/or it perform these computations / cognitions?

    ReplyDelete
    Replies
    1. The way to answer questions about HOW something is caused is to discover a causal mechanism that can cause it, and then test it to show that it does. Cogsci is trying to discover, test and explain how the brain (or anything else that can do it) causes cognitive capacities (including all those you mentioned in your question), equivalent to and indistinguishable from ours.

      "Implementation-independence" just means that if the way the brain causes our cognitive capacities is purely computational (i.e., algorithmic), then there are many different physical systems that could execute the same algorithm. If, however, algorithms are not enough, then implementation-independence is irrelevant.

      Delete
  35. The Turing Test seems to take a behavioralist approach to defining cognition, seeing as the Turing-Test-passing program was initially believed to have cognitive capacity as soon as its behavior became indistinguishable from the behavior of a human. Searle's Chinese room hypothesis effectively reasons that such a machine still has no understanding, and is merely performing input-output operations as it is programmed to. This argument extends to computation in general - showing that computation alone is inherently non-cognitive.
    I still do wonder whether a computer program could, after all, be made to be cognitive - that it could feel, and experience subjective experience. Its programming would have to go beyond the behavioralist approach to understanding consciousness, and would need to start somewhere more fundamental for one to make it have mental states indistinguishable from those of a living thing. This was touched upon in the last reading with the idea that a program may, in theory, be able to simulate a billion neurons such that we would have a digital brain; even then, I wonder whether such a thing could feel. Is there something uncomputable, unsimulatable, in our conscious makeup? Or, alternatively, could our consciousness, in all its subjectivity, be simulated in a somehow simpler way than the all-too physical simulation of neurons?

    P.S: I'm sorry to post this so late in the week! The readings were incredibly interesting.

    ReplyDelete
    Replies
    1. Your questions will all be touched on when we get to Searle.

      TT is passed if the capacity is lifelong. Not a 10-minute trick.

      You said what Searle concludes, but you don't say why.

      You wonder about feeling (the hard problem) and ask whether simulating neurons computationally would do the trick: see the other replies about simulation, modelling, VR, vacuum-cleaners and ice cubes.

      Delete
  36. One part of this reading that really got me thinking was the section regarding the hardware/software problem. Despite its lack of explanation of how the symbols involved in cognition are grounded, it seems a very powerful and intuitive account of cognition. The idea that mental states are computational states is not refuted by Dr. Harnad in the article; he argues only that this does not explain cognition in the way Pylyshyn claimed it did.

    The idea of mental states as computational states (the software that the hardware is running) gives an intuitive image of how it is that all humans are capable of doing the same things (for example abstract thinking, memory recollection, etc.). We are hardware running the same software, and what is consistent across everyone are the mental/computational states.

    However, this imagery, helpful as it may be, does little to explain where the software comes from, how the software works, and why it is that so many different people (with different hardware) can enter the same computational states despite evidence that the human nervous system is not consistent across all people; rather, it shows some amount of diversity. I don't feel very convinced by the argument that "The same software can be run on countless, radically different kinds of hardware, yet the computational states are the same, when the same program is running." (Harnad, 2005). I feel as though there is something left to be explained. How is it that every human is capable of the same feats regardless of the diversity of hardware?

    ReplyDelete
    Replies
    1. Implementation-independence is a real property of computation. But that's only a property of cognition if cognition is just computation. (We haven't gotten there yet.) It does not help with either the Easy or the Hard Problem.

      Delete
    2. I also found this aspect of the reading to be particularly interesting, specifically due to the effectiveness of the imagery, as you stated. The idea of us being 'hardware' all running the same 'software' to explain mental states as computational states is effective in that it draws a clear connection between mental states and ideas we are very familiar with, like hardware being required to run software. However, to further develop the issue you touched on, we all understand that software must be constructed by developers who ensure it can be compatible with different types of hardware. In the case of mental states, though, the question of who created our 'software', and how it was created in such a universally compatible way, remains. Is it possible that individually these computational states are not the same, and we only perceive them to be due to the limitation of language allowing us to describe our own state?

      Delete
  37. I'm surprised by everyone's lengthy posts and enjoyed reading them.
    I liked the article's discussion of 'homunculus' and 'mediated symbol-grounding' in describing the complexity of abstracts compared to computing. It raises the question of how these symbols are connected to the real world which allows a human or a computer to understand and interact with that world. From my understanding, cognition isn't just about manipulating symbols but also connecting them to experiences.

    On another note, the third-grade teacher example reminded me of a study I read about demonstrating differences in linguistic processing and input, where a group had to count and remember their number while the judge would intermittently chime in with a different number to confuse them. Surprisingly, there was a group that counted and remembered their numbers not by their symbolic representation but as abstract ideas, so they would not be confused when a different number was hollered. Apart from the linguistic advantage, this method of counting showed a deeper comprehension of numerical concepts beyond mere symbolic association, and an example of cognitive diversity in just how we process and internalize things.

    ReplyDelete
  38. From what I understood, this reading explained that the fundamental question about cognition is: How do we go from input to output? The reading reveals that introspection theories, like the imagery theory, do not offer a useful solution to this question. Introspection only allows us to postpone our questions without providing any real solutions.
    The reading then presents cognition as strictly computational, as an alternative. To view cognition as computation means that between the moment we receive the input and the moment we produce the output, something happens that we do not have explicit access to. Our goal is to formulate hypotheses and conduct experiments to validate or refute our theories.
    However, one problem with this approach was its restriction to cognition as purely computational. Evidence has shown that dynamic systems exist in the brain that can produce behaviors typically thought to be achieved through cognition.
    Furthermore, Searle demonstrated the limitations of computation with the Chinese Room Experiment, showing that computation alone is not sufficient to understand cognition.
    This reading concludes by suggesting a mixed approach to cognition, where cognition is understood as a combination of dynamic systems and computations.

    ReplyDelete
    Replies
    1. Yes, there are dynamic processes in the brain, but (according to the Strong Church/Turing Thesis) those could be simulated computationally. So that does not quite settle it. What, however, could not be replaced by a computational simulation, in passing TT?

      Delete
  39. This reading was quite interesting, because it challenged how I categorize consciousness, which is when one exhibits behavior similar to me (i.e. similar reactions to different scenarios) and shows certain feelings (i.e. empathy). However, when Searle mentions that the Turing Test is flawed because a program that passes it is no more cognitive than any other symbol system in logic, mathematics, or computer science, it made me question whether my definition is flawed. Because even though no one will really have the same behaviors and whatnot, it is not too hard to teach a program to mirror some emotions/behaviors. This once again raises the question of how we know whether others are having conscious experiences similar to our own, when it is so hard even to describe our own conscious experiences.

    ReplyDelete
    Replies
    1. We'll talk about mirroring in Week 4.

      You haven't grasped Searle's argument yet (Week 3).

      And what is the "Other Minds Problem"?

      Delete
  40. "Cognition is impenetrable to introspection." This phrase makes me think that conclusions reached by introspecting may be interesting, but often teach us more about the process of introspection than about other aspects of cognition. I tried applying this technique, but when I attempted to think about by own thoughts, I found it challenging to access the underlying cognitive processes. Instead, I simply became more aware of the act of introspection itself. (In the text the use of implicit and explicit was a bit confusing to me) From what I understood, introspection can only teach us about conscious and explicit processes, since we are being aware of our reflections. If I am doing something implicitly, then by definition I do not know I am doing it, so introspection will not be able to give us insight on these processes. However, cognition is also interested in the implicit, which is more similar to a computational process (or even dynamical, as long as it effectively accomplish the task). For example, the imagery theory where introspection is applied to be aware of the images in our heads fail to explain how we generate and identify these mental images implicitly, leaving unanswered questions about the underlying cognitive processes.
    Hence, I believe that the seminar structure of this course is a good way of exploring and understanding implicit processes of thinking. As we get together, talk about the theories and reflect, we can gain diverse perspectives on the subject that can lead to a more comprehensive understanding of implicit cognition.

    ReplyDelete
    Replies
    1. Yes, introspection doesn't explain either how we do what we can do, whether we do it implicitly or explicitly. But that doesn't imply that what we do implicitly is computational (nor even that what we do explicitly is noncomputational).

      Delete
  41. Harnad's discussion on the cohabitation of computation and cognition raises intriguing questions about the nature of human thought and the relationship between symbolic computation and sensorimotor experience. While Zenon Pylyshyn's approach seemed to suggest that everything cognitive could be reduced to symbols and rules, John Searle's Chinese Room argument challenged this notion by highlighting the importance of embodied, sensorimotor interactions in cognition. This debate reminds me that the mind is not just a computational machine but a complex interplay of physical and mental processes. It invites further exploration into how sensorimotor experiences and symbolic computation are integrated in human cognition and how this understanding can shape future research in the fields of artificial intelligence and cognitive science.

    ReplyDelete
    Replies
    1. "physical and mental processes"? You mean dualism?

      Delete
  42. This paper gives me a new perspective on cognitive science that I never had when taking other cognition courses, and it really intrigued me to question the mysteries of the human mind. The paper tells us what cognition really is, how it is related to computation, and how it is not all about computation. I agree with the author’s opinion in the last two paragraphs about potential ways to solve the symbol-grounding problem. From what I learned in a class called Computational Psychology, semantic distributional models learn the meaning of language in a way that mimics how humans learn languages – the meaning of a word is determined by its context. The model is trained on language materials, and the meaning of a particular word is updated whenever it is encountered in those materials by the model. It seems to me that the meaning of words is in the symbol system. Then can we say that the model actually knows the meaning of words and is thus cognitive?
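
    To make the distributional idea concrete, here is a toy sketch (my own hypothetical code, not the models from that course): a word's "meaning" vector is just its co-occurrence counts with nearby words, which is exactly why one can ask whether such a vector is grounded in anything outside the symbol system.

    from collections import Counter, defaultdict

    # Toy distributional model (hypothetical illustration): a word's "meaning"
    # is a vector of co-occurrence counts with the words around it.
    corpus = "the cat sat on the mat the dog sat on the rug".split()
    window = 2
    vectors = defaultdict(Counter)
    for i, word in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if i != j:
                vectors[word][corpus[j]] += 1

    # "cat" and "dog" end up with similar context vectors, but both vectors are
    # still symbols about other symbols -- which is where the grounding question bites.
    print(vectors["cat"])
    print(vectors["dog"])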

    ReplyDelete
  43. The reading left me wildly intrigued, especially with the state and development of LLMs (large language models) and artificial intelligence. Having played around with the various GPT models and now ChatGPT, I have some thoughts (inspired by the section “Computation and consciousness”). Taking Descartes’ “I think therefore I am” and filtering an LLM through it, we know that it cannot think, thus “understand.” (Though when you ask ChatGPT if it can understand or think, it will reply something along the lines of “yes, but not in a human way” – I personally think they fed it too much Terminator.) In reference to the text, where the hardware-software distinction is mentioned, it seems that the key and crucial differentiating factors are that humans have the ability to think (and know it) and to experience qualia – the subjective feelings that come with this. So the hardware-software distinction is limited in its depth. However, I did find it an enjoyable exercise to think about, especially with the development going on nowadays. I’m personally of the belief that our minds cannot be replicated to an exact point, but I do think that one day we will get close. And when that day comes I’ll come back to this paper, and this skywriting.

    ReplyDelete
  44. “Cohabitation: Computation at 70, Cognition at 20” presents and analyzes the views of several different scientists in terms of what they think “cognition” entails. In previous classes I’ve taken, cognitive processes were only briefly discussed and in more general terms, and so I have been looking forward to delving deeper into understanding how they work. Thus, the recurring question of “how?” was a theme throughout the paper that caught my interest – it was very effective in pointing out the flaws in cognition-related theories. For example, the introspectionist viewpoint regarding the creation of “mental images”: a theory criticized in this paper as leaving more questions than answers, it produced a lot of “how” questions: When people try to remember in this way, how exactly is each step in this process carried out (Harnad, 2009)?

    Furthermore, through this article’s questioning of theories by asking “how?” I also recalled specifically learning in past courses about how cognition can be seen as studying the “in-between” of mental processes. Continuing to branch off of the introspection example, my understanding regarding the flaws of this “mental image” theory is that explicitly stating and identifying the experience itself does not necessarily tell us the “in-between” functions or mechanisms that underlie this experience. As well, based on my previous knowledge regarding introspection, I believe that one of its disadvantages was the subjectivity of its methods— thoughts, feelings, or experiences could result in many different outcomes depending on the person. Therefore, not only is there an issue with figuring out the “how” of cognitive processes, but there also seems to be another issue: How and what is the best method to study this? The point of “being cognitively blind” influencing our understanding of cognition was mentioned in the reading; thus, how do we reach objectivity and avoid our own mind affecting research results in ways we may not initially realize? Flaws were not only present in introspection, but also in relying only on computation and behaviorism. Thus, this reading demonstrated how knowledge is always changing as we discover more and use different models in an attempt to better understand concepts-- with this in mind, how has the symbol-grounding problem, as well as cognitive science research overall, progressed over the past years in helping us to understand how these processes work and interact with each other? All in all, this paper truly highlighted the complex nature behind understanding “cognition.”

    ReplyDelete
  45. I might be about to throw out a bunch of weasel words and give a non-computational answer, but what if we have pre-memorized simple things and just synthesize them to perform more complex computations? For example, if you memorize all the additions from 1+1 to 9+9, you can synthesize those to be able to perform any addition. Something similar could hold for categorization as well. You synthesize the new stimulus with your previous knowledge: what you've memorized. We'd be able to reach that memory through association, trying to find a close relation between the stimulus and our memory bank. Whichever pre-existing category the stimulus matches, we categorize it as, for example, a "chair".
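
    A toy sketch of that idea (my own hypothetical code, not from the reading): only the single-digit facts are stored, and multi-digit sums are "synthesized" by table lookup plus carrying.

    # Hypothetical sketch: memorize the one-digit table, synthesize everything else.
    FACTS = {(a, b): a + b for a in range(10) for b in range(10)}   # the "memorized" facts

    def add(x, y):
        xs, ys = str(x)[::-1], str(y)[::-1]          # digits, least significant first
        result, carry = [], 0
        for i in range(max(len(xs), len(ys))):
            a = int(xs[i]) if i < len(xs) else 0
            b = int(ys[i]) if i < len(ys) else 0
            s = FACTS[(a, b)] + carry                # single-digit lookup, plus the carry
            result.append(str(s % 10))
            carry = s // 10
        if carry:
            result.append(str(carry))
        return int("".join(reversed(result)))

    print(add(478, 365))   # 843, assembled entirely from the memorized one-digit facts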

    This reading is definitely making me question how I think and how my brain does things every day without me even noticing. It's so interesting to be able to read it and be like "I do do that". It's like zodiac signs but more scientific.

    ReplyDelete
    Replies
    1. You bring up a very interesting point! Although this is certainly still insufficient as an almighty answer to the hard problem, it is a unique and thought-provoking way of thinking about computation and cognition, and one that made me think of an existing neurolinguistic theory of the brain as well. Namely, what you are describing made me think of the concept of semantic networks, which is a theory of memory and meaning representation in the brain. In simple terms, it explains that “meaning” is represented in nodes that are connected together in a network; each node is a concept, connected to different nodes through association or overlap in meaning, and once it is activated, this activation is automatically spread to connecting nodes; overall, this interconnected network gives rise to the meaning of “things”. Now, this theory is not without flaws, but it could be a potential start for an explanation of how the brain’s expansive neural connectivity gives rise to cognition. It could also be seen as a potential solution to the symbol-grounding problem, although I am sure that there are several defects in its argument that we are not taking into account yet. Nonetheless, it is a very interesting way of looking at the brain’s cognitive and categorization capabilities.
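
      For anyone curious, here is a bare-bones sketch of the spreading-activation idea (my own hypothetical toy, not a worked-out model): concepts are nodes, associations are weighted edges, and activating one node passes decayed activation to its neighbours.

      network = {
          "apple":  {"fruit": 0.9, "red": 0.6, "tree": 0.5},
          "fruit":  {"apple": 0.9, "banana": 0.8},
          "red":    {"apple": 0.6, "fire": 0.4},
          "tree":   {"apple": 0.5, "leaf": 0.7},
          "banana": {"fruit": 0.8},
          "fire":   {"red": 0.4},
          "leaf":   {"tree": 0.7},
      }

      def spread(start, decay=0.5, steps=2):
          # Activate one concept, then let decayed activation flow along the edges.
          activation = {start: 1.0}
          for _ in range(steps):
              new = dict(activation)
              for node, act in activation.items():
                  for neighbour, weight in network.get(node, {}).items():
                      new[neighbour] = max(new.get(neighbour, 0.0), act * weight * decay)
              activation = new
          return activation

      print(spread("apple"))   # 'fruit', 'red', 'tree' light up most; 'banana', 'leaf' faintly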

      Delete
  46. This reading was fascinating as it described very thoroughly the ways introspection, behaviorism and the mental imagery theory (supporting Chomsky’s universal grammar) failed to explain cognition.
    As Pylyshyn’s thesis brought us back to the association of cognition with computation, I thought it would be interesting to draw a parallel between Plato’s allegory of the Cave and Pylyshyn’s understanding, to explain the conclusions reached by Searle. Searle's sino-spoiler thought experiment posits that purely computational systems may not truly understand the meaning of the symbols (shapes) they manipulate, just as Plato's prisoners in the cave may not fully understand the world outside based on the shadows (shapes) they see. In this sense, Searle might have proven in some way that manipulating the symbols and rules within Pylyshyn's computational framework might only give you the illusion that you learn things about cognition and the mind, while you still have no idea of what is outside this symbolic system (Cave) - a sensorimotor system: there is not a full engagement with the truth and the real structures of human understanding. (I hope the analogies are clear and hopefully help some people :) )
    In this sense, maybe the debate about cognition has been more about the details and degree of sensorimotor involvement rather than questioning the fundamental role of computation.

    PS: Sorry for posting the skywriting now! I woke up in the middle of the night and realized I forgot to post the comment.

    ReplyDelete
    Replies
    1. I didn't understand the analogies.

      Delete
    2. Hi Mamoune!
      I read up on Plato’s allegory of the cave because I was not familiar with it. I see how you could think of a computer computing as similar to a human ‘cognizing’ or computing something, except that the computer is imprisoned in a cave, chained to a wall, able to perceive only shadows. You could propose that it lacks the sensorimotor association, which is the key to cognition and meaning. However, the allegory seems to be used more for a philosophical argument relating to the theory of forms and the moral duty of those who have seen the ‘Good’. It makes a distinction between perceiving and reasoning, while we are more interested in defining meaning.

      Delete
  47. We often see the brain as computer analogy in cognitive science, however, as we have seen, computational machines are not sufficient to be cognizant. Computations only simulate dynamical processes. The capacity to feel and be empathetic is perhaps one of the biggest gaps. The AI chatbot Tessa which was used by the National Eating Disorders Association (NEDA) is a great example. It gave dieting advice instead of offering support to people with EDs, a problem that arose when it moved from a rule-based program with prewritten responses to a generative system. Therefore, human presence and guidance are vital and cannot be replaced. On the other hand, we have seen machine-learning AI AlphaGo surpass a Go world champion and show innovation by making an unexpected move, Move 37. Understanding how it achieved that would change the cognitive science scene.

    ReplyDelete
    Replies
    1. What is computation? And what is the Turing Test?

      Delete
  48. For this week’s readings, I noticed that as I read each article, I was learning more about each topic but realized that the deeper the article dove into each topic, the more I realized I didn’t understand. The first reading, “What is a Turing Machine?”, was the easiest of the three to understand. From my understanding, a Turing machine is a computing device that has the ability to manipulate a finite number of physical symbols (instructions are assigned to each symbol by an algorithm?). I think I can grasp the very basic idea of a Turing machine, but something that I would like to learn more about are the limitations of a Turing machine besides uncomputable numbers and functions (which may be related to different types of limitations). After reading the second article, “What is Computation?”, I am very curious to know what fields can be boiled down to computation. For example, the reading uses DNA translation as an example of computation, but I had never even considered that human/biological processes could be essentially simplified into an algorithm + computation. More specifically, I am interested in which characteristics of human behaviour/function can be excluded from the formal definition of computation, and why.

    ReplyDelete
    Replies
    1. What is the difference between something that is itself doing computation, and something that can be modelled by computation?

      Delete
  49. I accidentally put my skywritings for reading 1b in 1a comments, and 1a in 1b comments. Sorry about this.

    Here are the comments for reading 1b:
    “Computation” looks like a very complicated word, and it seems to have a deep connection to computer science, an area I was not interested in at all and was quite scared of. And how is it related to a psychology/neuro course? However, after reading this week’s readings, I have a better understanding of how computation is related to neuroscience.

    I learnt that every neuron can be represented by its own digits. Neurons are connected to each other and accumulate input to a peak at which they can spike. This builds on what I have learnt in my neuroscience course and actually draws a clearer picture for me. With the help of the digits, it shows a network filled with dynamic systems that demonstrate and predict how different areas of the brain function, like what they will do when they detect a movement, in a digital way with a temporal element. It is such a fascinating way to describe the phenomena in our brain from an accurate and detailed point of view. If it is developed further in medical science, it might be super helpful to patients who have neurological problems or disorders.
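
    If I have understood that picture right, it sounds like an integrate-and-fire unit: input accumulates until a threshold ("peak") is reached and the unit spikes. A toy sketch (my own hypothetical code, not from the readings):

    # Toy leaky integrate-and-fire unit (hypothetical sketch):
    # input accumulates in a membrane variable; crossing the threshold produces a spike.
    def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
        potential, spikes = 0.0, []
        for t, current in enumerate(inputs):
            potential = potential * leak + current   # leaky accumulation of input
            if potential >= threshold:
                spikes.append(t)                     # the unit fires
                potential = 0.0                      # and resets
        return spikes

    print(integrate_and_fire([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))   # spikes at times [3, 6]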

    ReplyDelete
    Replies
    1. What is the task of cognitive science? and what does computation have to do with it?

      Delete
  50. In class we talked about the quadratic equation and how many of us know how to use it, but we don’t know what it really means. We can use it in math equations because we know the rules, but we wouldn’t be able to reverse engineer the equation in order to see how it came about. Searle’s explanation of how he could do the Turing Test in Chinese is exactly this idea; he would theoretically be able to memorize the symbol-manipulation rules without actually understanding them. This makes me think that although we have rules and organization in our heads, and are therefore able to compute, not everything is computational because the computation doesn’t give us answers to how the meaning comes about.

    ReplyDelete
    Replies
    1. Well, yes. But still a bit uncertain. Our doing algebra that we don't understand, while knowing how to execute the algorithm, is really like Searle passing the TT in Chinese without understanding Chinese. But that has nothing to do with "reverse-engineering" the solving of a quadratic equation.

      Delete
  51. The topics covered in this reading are interconnected by posing question after question, all stemming from an unsolvable query. It feels like opening the door of a maze only to find that more doors await discovery. During this maze-like adventure, we have closed the doors on behaviourism and introspection. Cognitive science cannot rely on introspection, as it does not lead to a functional explanation of cognition. In relation to social psychology, introspection only buries previous solid thoughts deeper and fails to unearth any new thoughts or pathways that could provide any objective explanations. On the other hand, computation seems to be heading in the right direction. However, assuming that everything can be simulated through the manipulation of symbols by computation raises the question: as we construct these symbols, does it imply that we can allow these inventions to reconstruct us? It feels like a process of self-reflection.

    ReplyDelete
    Replies
    1. I think it is also important to note that just because cognitive capacities could be reconciled with computation, it doesn’t necessarily follow that that is how it works in reality. Shepard’s mental rotation task could be performed computationally using Cartesian formulae, but we know that this is not what happens in practice (we don’t know what does happen either, but we know it’s not computation). I also find it useful to think about this in the context of AI. The fact that AI has been successful in emulating humans’ cognitive abilities (e.g., simple addition) despite learning differently (concurrent rather than sequential training) demonstrates that weak equivalence is not strong enough evidence to support inferences about internal processes.
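
      For what "performed computationally using Cartesian formulae" could look like, here is a hypothetical sketch (my own toy code): rotate one shape's points with the standard 2-D rotation formula and check for a match. Shepard's reaction-time data suggest this symbolic route is not what people actually do.

      import math

      def rotate(points, theta):
          # Standard Cartesian rotation: (x, y) -> (x cos t - y sin t, x sin t + y cos t)
          return [(x * math.cos(theta) - y * math.sin(theta),
                   x * math.sin(theta) + y * math.cos(theta)) for x, y in points]

      def same_shape(a, b, theta):
          # Do the points of a, rotated by theta, coincide with the points of b?
          return all(math.isclose(p[0], q[0], abs_tol=1e-9) and
                     math.isclose(p[1], q[1], abs_tol=1e-9)
                     for p, q in zip(rotate(a, theta), b))

      shape = [(1, 0), (2, 0), (2, 1)]
      print(same_shape(shape, rotate(shape, math.pi / 2), math.pi / 2))   # True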

      Delete
    2. I didn't understand the point about self-reflection. Cogsci is trying to reverse-engineer, not self-reflect.

      About equivalence: Pylyshyn wanted strong equivalence, but it's not clear why. Turing only wanted weak equivalence, and that makes more sense.

      But of course some or a lot of cognition may not be computation at all.

      Delete
    3. The self-reflection I mentioned relates to the idea that the symbols we create can reflect our underlying thoughts, as each symbol carries meaning derived from our thought processes. Therefore, the question arises: can these constructed symbols, which serve as reflections of our thoughts, successfully undergo the reverse-engineering process to build our cognitive model?

      Delete
    4. Sorry, when I mentioned 'self-reflection,' I was referring to a form of 'one's thinking reflection,' where the symbols themselves are a reflection of our cognitive processes.

      Delete
  52. *reposting since I no longer see my comment* Defining cognition remains a challenging task, even after delving into this paper. I hope this is to be expected as we've only just begun the semester... However, it's becoming clear that it involves more than just computation; it's about giving meaning to that information. The reading suggests that cognition goes beyond computation, as seen in challenges like the symbol grounding problem, the need for interpreting data, and how it relates to our physical experiences. If we take Siri, for example, it follows rules to answer questions. That being said, what truly makes it cognitive is its ability to understand the meaning behind our queries. At this point in time, should we even bother defining which AI aspects are truly cognitive now, or await further advancements and refinements in technology?

    ReplyDelete
    Replies
    1. Why do you say Siri understands at all? It responds to queries. It can't do anything else. And it certainly can't pass T2 (let alone T3). What do you mean by "understand"? Does ChatGPT understand? Ask it; and then chat about it: Does ChatGPT pass T2?

      Delete
    2. I apologize, I miswrote. I meant to say: What WOULD make Siri cognitive is if it was able to understand the meaning behind our queries, etc...

      Delete
  53. Searle’s Chinese Room Argument has always been extremely interesting to me, especially for the implications it brings about if you think about it more carefully. Obviously it is an effective refutation of the Turing Test, demonstrating that it is an ineffective way to test for computational intelligence, because a machine could theoretically pass the Turing Test without actually exhibiting understanding or intelligence. In essence, given infinite computational capacity, a machine could replicate human intelligence when in fact it is merely following a predefined set of rules. However, the more interesting implication of Searle’s argument is that, since this experiment could be generalized to any arbitrary task, it seems that machines are inherently incapable of understanding anything. This is in direct agreement with Roger Penrose, who believes that consciousness is necessarily non-computational because, at the very least, one element of consciousness is absolutely non-computational: understanding. He believes, alongside Searle, that understanding and awareness are not simply by-products of simple mechanistic processes implementable in a machine.

    ReplyDelete
    Replies
    1. Hi Stevan, I too find Searle’s Chinese Room argument and its implications for the Turing Test fascinating. I liked how you brought up Roger Penrose’s perspective on this matter. He argues that understanding is inherently non-computational because, on his view, consciousness itself involves processes that extend far beyond computational abilities. As you mentioned, Penrose does believe consciousness is non-computational, but his reasons for believing so are what is really interesting. Among the points he uses to argue that understanding is non-computational are humans’ innate capacity for insight and intuition in mathematics and logic, and our ability to grasp unprovable formulas or ideas. In other words, being able to understand complex scientific concepts without being able to manipulate them or give a correct answer, or recognizing incompleteness in a theorem or argument, shows a deeper level of understanding that computational processes cannot replicate. Penrose also points out that our grasp of simple paradoxes that cannot be proved by a computer, but can nonetheless be understood by us, is further evidence that human understanding is non-computational.

      Delete
  54. Read Week 3 (but read Week 2 first): Why do you say Searle refutes the TT rather than just computationalism? (You can save your answer for Week 3.)

    And what about T3?

    How does Penrose show understanding is noncomputational? What is understanding?

    ReplyDelete
  55. This paper reflects on historical views of cognition, beginning with behaviorism and the notorious ‘black box’ problem: how do we arrive at an output from an input? When analyzing the cognitive capabilities of humans, it is clear that they extend beyond the scope of the reward/punishment learning that behaviorism has to offer. The paper then looks at the roles of computationalism, introspection, and the theory of mental imagery, none of which can fully answer the black-box question on its own. In the case of retrieving the name of my third-grade teacher, I can follow my train of thought (introspection) to imagine myself as a third-grader, getting an image of my teacher (mental imagery), then a name. However, this process still does not address the functional question of ‘how’ I arrive at this conclusion; it only describes the products of the steps taken toward the goal. With respect to computation, while it likely plays a role in cognition and can accomplish a multitude of tasks, Searle’s Chinese Room experiment shows that cognition is not JUST computational. This paper was interesting in that it analyzed each of these models of cognition and addressed the strengths and weaknesses of each.

    ReplyDelete
  56. The “impenetrability of mind” that is briefly brought up in the middle of the article stuck with me as I got to the end of it. In the last paragraph it is postulated that cognitive science can’t ‘succeed’ (or something like that) until we have achieved T3-passing status. Bringing back this idea of impenetrability, along with something another student raised about the importance of sensorimotor ability and the human-seeming input-output nature of, say, Searle’s Chinese Room, we get something that passes as human on the outside and, on the ‘soul-side’, the illusion of an inner life. Yet is that illusion all there is and all that matters? I think most of us would like to say no to that last question. I wonder if Harnad was saying that there is some impenetrability to cognition even once T3-passing status is achieved: some unknowability from the outside remains. We cannot stick ourselves in the room, as Searle does in his Chinese Room thought experiment, because by being there we ruin it, inserting an outside consciousness into something that has to develop on its own. I’m getting off the straight and narrow here and into fantasy land, but I think it’s fun and worth something: is cognition merely the result of a complex enough network that’s set up and set a-ticking in the right way, like a chaotic pendulum sent off from its starting point? Who knows where it will go. We can try to predict and understand its movements, but practically there is some unknowability to its nature.

    ReplyDelete
  57. Reading Cohabitation: Computation at 70, Cognition at 20 clarified a confusion/question that was raised for me with the 1a reading. As I understand it, in the Turing reading from 1a, the Turing machine required a manual or instructional guide to carry out the necessary processes. The computational nature of this machine is intended to represent the computational nature of human thinking, in which humans carry out cognitive processes based on inputs. The question this generates for me is: what would our instruction guide be? Firstly, different people approach the same inputs in different ways based on inherent factors such as personality or thinking style. Take a riddle which involves a play on words. Someone who is more linguistically inclined may think about the riddle’s wording and solve it very quickly, while a more logical person may take much more time before getting it. Does the first person’s “manual” make them better at linguistic tasks? Secondly, one’s life experiences may affect the way they respond to stimuli. If I have had a negative experience with rats, seeing one may trigger a very visceral reaction and cause me to run away while many others would not react. This suggests that our “manuals” are malleable. Finally, human beings can to some degree override our inherent reactions, or the computations we carry out, based on a goal or intention. The Stroop test is an example of this: while I am automatically inclined to read the word and say that colour, I can override this computation and instead focus on the colour of the ink. Would this suggest that I am able to write my own instructions or change them based on my goal? But then how is this goal encoded and computed?

    ReplyDelete
  58. This paper is a survey of the different attempts to understand the mind, born of the behaviorist school's rejection of the relevance of the mind, "the black box", except for the behavior that it outputs. The article moves through the cognitive scientist and philosopher Zenon Pylyshyn's conceptual arguments, as he attempts to avoid the infinite regress involved in the homunculus hypothesis, the idea that there is another mind, inside our mind, interpreting the sensory stimulus, the input (an infinite regress, because where would the homunculus' cognitive skills come from, if not another homunculus?). Pylyshyn's point of view initially clashes with the depictive representation theory, pioneered by Kosslyn, who argued that knowledge is directly represented through mental images. Pylyshyn runs into a wall when he realizes that his opposing propositional theory also needs the propositions to be interpreted by a homunculus. He then takes the symbolic basis of his propositional theory further, moving toward a purely computational theory of cognition: the conscious manipulation of mental patterns and symbols becomes, according to him, analogous to a computer running a program. The article glosses over why his denial of the existence of noncomputational structures was not tenable, but Pylyshyn runs into yet another impasse when he divides mental processes into computational, or cognitive, and noncognitive. While we have chipped away at the black box, another black box remains: what is noncognitive? And how do we determine the criteria that separate it from the cognitive? It could very well be that the "noncognitive" aspect of thought that escapes our awareness is actually computational, running like a program outside our conscious awareness.
    Moreover, Searle's Chinese Room experiment shows that symbol manipulation through mere mastery of a symbol system's syntax can simulate human language output without understanding. It then becomes clear that if the mind does use computational methods, the computer's use of symbols is not grounded in sensory experience or in humans' cultural (subcognitive?) relation to the symbols.
    A similar dichotomy emerges each time: behavior and mind, conscious and subconscious, homuncular and symbolic, and finally computational/cognitive and something else that we have yet to name.

    ReplyDelete
  59. One question that draws my attention is the symbol-grounding problem and the author's response to it: what the "only way" is to connect a system's internal symbols with the things those symbols can be interpreted as being about, while the role of computation is under consideration. I have my own opinion on this. In the scientific knowledge system, theories integrate things into knowledge through a symbol system. An abstract system must be given meaning through some kind of interpreter, a "definition": for instance, the unit "meter" means nothing on its own, but an interpreter can define it as a unit for describing distance. Relying on an interpreter to understand cognition through computation, the symbol system used to express an abstract system, may still be a compromise. And since modern statistics-based neural networks can accurately simulate a person's decision-making from inputs and outputs, we may still rely on computation to simulate and interpret the abstract system of cognition.

    ReplyDelete
  60. This reading completely changed the way I understand cognition in the human brain. From what I learnt in PSYC 213, structuralism and functionalism seemed to align with what this paper highlights about explaining how behavior and physiological processes relate to each other. I think those concepts really do focus on discharging the homunculus, and they explain why and how we behave in a physiological sense. But what I understood is that, with respect to Pylyshyn's and Kosslyn's theories of descriptive vs depictive representation, discharging the homunculus cannot apply to depictive reasoning, which should mean that, in theory, descriptive reasoning based on propositions that can be imagined would not involve the homunculus. There is an irony here: if we need to imagine those propositions, at least in the context of imagery, we need the homunculus, the little man, to mirror those images BASED on the propositions. So the only way to discharge the homunculus and align with Pylyshyn's descriptive approach is to assume that his theory has a computational basis rather than a 'mirroring images' basis.

    ReplyDelete
    Replies
    1. You propose that to align with Pylyshyn's descriptive approach, we should consider that his theory might have a computational basis rather than a purely "mirroring images" basis. This is a valid perspective. Cognitive psychology has increasingly adopted computational models to explain how mental processes work without relying on a homunculus. These models involve the manipulation of symbolic representations, which aligns with the idea of "computational thinking."

      Delete
    2. Hi Marie,
      My apologies for engaging back quite late! But that is exactly my point; it is why I believe that the catcomcon course delves much deeper into the implications of the basic principles we learned in our core courses at McGill. I had never considered that Pylyshyn's approach is computationalist, so I never understood why or how the homunculus operates the way it does. I took this course specifically to address this, and I am so glad to see that I wasn't going crazy for questioning the basis of those views ;-;.

      Delete
  61. In our quest to unravel the mysteries of the world, we are forced to encounter abstract concepts that go beyond the limits of human understanding or that are simply awaiting discovery by science – thus leaving us with questions that may never get a definite answer. Such concepts include time, infinite continuity and many others. Cognition, the fundamental mechanism through which we seek understanding, is one such enigma.
    John Searle’s Chinese Room experiment showed that a system able to carry out cognitive functions by manipulating symbols is not thereby guaranteed to have actual conscious understanding. However, that doesn’t take away from the fact that some cognitive abilities can still be understood through computation. Cognitive functions like problem solving, memory and decision-making can be explained by the “rule-based symbol manipulation” of computation, as in the Turing machine. We would understand complex ideas or sentences through compositional semantics: interpreting the meaning of their small individual parts and combining them back, following certain rules, into an interpretable structure.
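
    As a toy, hypothetical illustration of that compositional point (the lexicon and the combination rule below are made up for illustration, not taken from the paper), here is a minimal Python sketch in which the meaning of a phrase is computed from the meanings of its parts plus a rule for combining them:

```python
# Toy compositional semantics: word meanings plus a combination rule.
# The lexicon and grammar here are hypothetical, purely for illustration.
lexicon = {"two": 2, "three": 3,
           "plus": lambda a, b: a + b,
           "times": lambda a, b: a * b}

def interpret(phrase):
    """Interpret a 'NUMBER OPERATOR NUMBER' phrase by looking up each
    word's meaning and combining them with the operator's rule."""
    left, op, right = phrase.split()
    return lexicon[op](lexicon[left], lexicon[right])

print(interpret("two plus three"))   # 5
print(interpret("two times three"))  # 6
```

    Of course, the “meanings” in this sketch are themselves just more symbols inside the program, which is exactly the symbol-grounding worry the paper raises.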
    On the other hand, sensorimotor dynamics, such as internal analogs of spatial dynamics (which cannot be explained by symbolic computation), together with neural networks, must play a crucial role in understanding cognitive processes.
    Behaviorists weren’t wrong when claiming that introspecting “in an armchair” wouldn’t help the puzzle of how minds work; symbolic computation makes sense for many cognitive functions; and the symbol-grounding problem can be solved in part by the theory of sensorimotor dynamics and neural networks, linking the external world with human interpretation. Many theories have been suggested, and each of them has helped us make more sense of our cognitive abilities. Embracing the notion that not everything can be neatly defined or understood by the human mind allows us to navigate the complexity of existence with humility and wonder. Rather than succumbing to anxiety through relentless overthinking, we can find liberation in accepting the beauty of the unanswered questions that enrich our exploration of the universe.

    ReplyDelete
  62. This reading really showed me how little we know about the human capacity for cognition. Through various attempts to explain exactly how and why we are able to do, think, and feel, it seems we are no closer than we were in Turing's time. What I found interesting was Pylyshyn’s point about the importance of the level of the “Virtual Machine” compared to the “Functional Architecture”. What I wanted to know is this: if computational states are independent of the dynamical physical level, and the physical details of the machine are therefore irrelevant, then when we apply this sort of thinking to cognition, how does it explain the differences in cognitive ability between humans? Is it really that, as humans, we are all at the same computational-state level, and the differences lie only in other aspects like memory, speed of processing, and having newer (younger) parts?

    ReplyDelete
  63. I thought this paper's exposition of the symbol grounding problem was very lucid and clear. I found the conclusion that human cognition cannot be computational “all the way down” fascinating, and very valid given the implications of Searle’s Chinese Room argument and the symbol grounding problem. However, I am having trouble understanding what exactly the “dynamical processes” of the brain, as opposed to its computational processes, are. If dynamical processes are “real, parallel, distributed neural nets” (page 10), are dynamical processes analogous to physical neural networks, or the large-scale “wiring” of the brain?

    Furthermore, I found the suggestion that we focus our energies on trying to beat the Turing Test for all of our human behavioural capacities very compelling. If this is what we ought to do to gain a deeper understanding of cognition, I think perhaps the best thing to study from the perspective of cognitive science is human neuroembryology. This is because nature has already beaten the Turing Test for human behavioural capacities, and we can too, simply by making a baby. If we study in detail the neuronal development of human embryos, we should theoretically be able to read nature’s instruction manual for building a Turing-Test-beating cognitive device. With this research, we could attempt to make computational models of human embryonic neurodevelopment and generate artificial Turing-Test-beating intelligence.

    ReplyDelete
  64. To some extent, I believe the ideas described in this reading are quite limited. We have a much better understanding, today, of the conceptual confusions these great thinkers struggled with for decades. Repeatedly, during the reading of “Cohabitation: Computation at 70, Cognition at 20”, I was puzzled by why people could maintain these disagreements and latch onto assumptions that were not, I believe, justified.

    First, it is clear that Zenon’s criticism of behaviourism, and generally his ability to notice, and expose, question-begging (fake) explanations of cognition were well-founded, and that anyone claiming to “understand”, mechanistically, a process they could not recreate and precisely predict, either deterministically or probabilistically with precise distributions, was not using a strong, scientific, meaning of understanding.

    Moreover, it is clear that the functional frame by which theoretical philosophers and psychologists attempted to understand cognition is an incomplete one. Knowing the full set of inputs to a cognitive system, and an algorithmic, or computational, method for reaching the corresponding outputs, is not enough to get the precise explanation of the brain’s method of cognition; it merely describes *a* way one could go about passing the Turing Test for a particular (human, or non-human) mind. This, certainly, may give some flavour of an understanding, if the functional algorithm mapping the inputs to outputs is compressed enough to give some insight, to let the person seeing this algorithm memorize the functional structure easier than by just memorizing the input-output pairs of the mind’s thoughts.

    One might hope that a more compressed functional algorithm producing our human mind’s behaviour must contain “insight” into the brain-specific functional algorithm, which may well be more complex than that. Though this is likely, it is not necessary; an algorithm may be more complex than required, in an actual brain, for reasons of physical efficiency, or physical necessity. Therefore, though a functional mechanism that passes the Turing Test for a certain mind may be weakly equivalent to that mind, and have the message length of the Kolmogorov complexity of the human’s actual brain, this need not mean that, somehow, the “deeper understanding” of what the human executes, when doing cognition, is the compressed algorithm, nor does it tell us much about the effects of perturbation on this assumed-static system.

    Since weak equivalence is likely possible, i.e., two minds with identical input-output pairs may have different internal mechanisms and different internal experience, we conclude that, even knowing the minimal program that encodes all our life’s decisions, we still have something to explain, for this maximally compressed mind need not have the same conscious experiences that we do, despite outwardly appearing to.
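
    To illustrate what weak equivalence looks like in the simplest possible case, here is a purely hypothetical Python sketch: the two functions below produce identical outputs for identical inputs, yet their internal mechanisms differ, so the input-output mapping alone does not tell us which mechanism is actually being executed.

```python
def sum_iterative(n):
    """Sum 1..n by explicit stepping (one internal mechanism)."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    """Sum 1..n by Gauss's closed-form formula (a different mechanism)."""
    return n * (n + 1) // 2

# Weakly equivalent: same input-output mapping, different internal process.
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(100))
```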

    ReplyDelete
    Replies
    1. Other confusions I found were:
      - An imagined dichotomy between cognitive processes and non-cognitive ones. The mind’s output displays an appearance of self-awareness of some cognitive processes (conscious thoughts) and not others (identifying lines soon after light hits the retina), but there is no fundamental distinction there. All is just physics instantiating a structure of variable levels of consciousness (which we don't understand).
      - Dynamical vs computational processes. A mind can change over time, dynamically, where every bit of evidence applied to the senses goes through the causal mechanism that leads to constrained expectations by the cognitive process, and this use of more evidence to update the causal mechanisms makes them grounded in reality and constrains predictions of future experience. This process is a computation that locally updates the weighting of different synaptic pathways, in ways we do not understand (a toy sketch of one such local update rule appears after this list); but it is clear that the processes are both dynamic and computational.
      - “Systematic semantic interpretability” of billion-dimensional minds is nearly impossible, since the semantic meaning of symbols is inherent, or emergent, in the structure of the causal connections between different nodes, neurons, or symbols, and that structure is extremely complex. The solution to the symbol-grounding problem is then completely mundane, but hopelessly complex. The information-theoretic bits we gain in an accurate model of reality, as the mind shares symbols or verbal concepts with us, are what it means for their symbols to mean anything; their symbols sharing a common structure with reality, such that they constrain our experiences in predictable directions, is what grounds the symbols in reality. The question is then not begged; the interactions between nodes or symbols are simply so complex that they defeat “conceptual explanations” we can hope to understand. A superintelligence, however, could probably understand the complete conceptual structure of our minds, but perhaps not even its own. This is demonstrated by LLMs, which show understanding through being able to causally predict the influence of different interactions between concepts that map accurately to reality, and this is far from sufficient for our being able to read their parameters and understand, from the structure of their mind’s design, the extent of their knowledge.
      - Searle’s Chinese Room argument seems to be insightful, and then Searle simply… misinterprets its insight completely? It shows that the hardware (Searle himself) need not understand the computation, nor need the software (the book), but we judge understanding insofar as the structure of this system leads to symbols having an ability to causally predict reality. Therefore, “understanding” is not inherent in the hardware, but a feature of the software, which need not be aware of its own understanding. Of course, claiming this Chinese book exists does not indicate the knowledge to create it, and is therefore, in its own sense, question-begging if it claims to “solve” cognition this way.
      There is more to say about this Chinese Room argument, and my conceptual clarity for terms like "understanding" is lacking here (is it a conscious feeling? an outward-facing display?), but I'll end the skywriting here anyway.
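
      Picking up the synaptic-updating point above: here is a minimal, purely illustrative Python sketch of one such local update rule (a simple Hebbian rule). It assumes nothing about real neural mechanisms beyond the idea that connection strengths change as a function of local activity.

```python
def hebbian_update(weights, pre, post_activity, learning_rate=0.01):
    """Toy Hebbian rule: each weight grows in proportion to the product
    of its presynaptic input and the postsynaptic activity."""
    return [w + learning_rate * x * post_activity
            for w, x in zip(weights, pre)]

weights = [0.1, 0.5, -0.2]   # three synaptic weights onto one unit
pre = [1.0, 0.0, 1.0]        # presynaptic activity on each pathway
post = sum(w * x for w, x in zip(weights, pre))  # postsynaptic activation
print(hebbian_update(weights, pre, post))  # only active pathways change
```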

      Delete
  65. **BLOGGER BUG**: ONCE THE NUMBER OF COMMENTS REACHES 200 OR MORE [see the count at the beginning of the commentaries] YOU CAN STILL MAKE COMMENTS, BUT TO SEE YOUR COMMENT AFTER YOU HAVE PUBLISHED IT YOU NEED TO SCROLL DOWN TO ALMOST THE BOTTOM OF THE PAGE and click: “Load more…”
           ________________
              Load more…
           ________________
                  ——
    After 200 has been exceeded EVERYONE has to scroll down and click “Load more” each time they want to see all the posts (not just the first 200), and they also have to do that whenever they want to add another comment or reply after 200 has been exceeded.
    If you post your comment really late, I won’t see it, and you have to email me the link so I can find it. Copy/Paste it from the top of your published comment, as it appears right after your name, just as you do when you email me your full set of copy-pasted commentaries before the mid-term and before the final.
                  ——
    WEEK 5: Week 5 is an important week and topic. There is only one topic thread, but please read at least two of the readings, and do at least two skies. I hope Week 5 will be the only week in which we have the 200+ overflow problem, because there are twice the usual number of commentaries: 88 skies + 88 skies + my 176 replies = 352! In every other week there are 2 separate topic threads, each with 88 skies plus my 88 replies (plus room for a few follow-ups when I ask questions).

    ReplyDelete
  66. Can we still say that ChatGPT operates based on computation when it had access to the "big gulp"? In other words, if we manage to make it pass T2 using an upgraded algorithm, should it be accepted as a candidate, or would it be disqualified for having had access to a vast amount of data beyond human capacity?
    In essence, should there be a condition that the TT candidates obtain their capabilities in ways within the realm of human capacities? (Big gulp would thus be a form of cheating as no human can memorize that amount of information).

    ReplyDelete
    Replies
    1. Hi Natasha, as stated in previous posts and in the reading, computation is a rule-based symbol manipulation system. Symbols, just arbitrary shapes that can be numerical, textual or graphical, are processed by specific algorithms to produce desired outputs. Regardless of the big gulp, ChatGPT is still computational as it takes in textual symbols as input and manipulates them based on the algorithms embedded in the model. It would have to break down texts into smaller chunks when analyzing and generating (and re-generating) responses based on its data. The big gulp would not change the fundamental computational nature of the model.
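
      To make the “breaking texts into smaller chunks” point concrete, here is a toy, hypothetical Python sketch of shape-based symbol manipulation: text is split into chunks and each chunk is mapped to an arbitrary integer ID, without any reference to what the chunks mean. This is only an illustration of symbol manipulation; it is not ChatGPT’s actual tokenizer.

```python
def toy_tokenize(text, vocabulary):
    """Split text into word-chunks and assign each an arbitrary ID.
    The IDs carry no meaning; they are just distinguishable shapes."""
    ids = []
    for word in text.lower().split():
        if word not in vocabulary:
            vocabulary[word] = len(vocabulary)  # assign the next free ID
        ids.append(vocabulary[word])
    return ids

vocab = {}
print(toy_tokenize("the cat sat on the mat", vocab))  # [0, 1, 2, 3, 0, 4]
```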

      In regards to the ethical portion of your comment, I can’t come up with an answer. But it makes me think about what it means to pass the TT, where we would not be able to tell the difference between a machine and a human. So if ChatGPT manages to pass while having such a large database, would the existence of the big gulp matter if we couldn’t even tell that it was a bot through observation? Would it matter if that huge amount of data were just purely words and symbols with no underlying meaning/grounding in the real world?

      Just a side note: the big gulp makes me think of a person who is “gifted,” so to speak, such as someone with a photographic memory who can store vast amounts of information in their brain compared to most people who don’t have this capacity, or those with hyperthymesia, who can remember most events in their lifetime with great precision.

      Delete
  67. The reading clearly conveys the goal of cognitive science, compared to neuroscience or philosophy, which is to explain HOW we perform our capacities (i.e., reverse-engineering our cognitive capacity). The easy problem poses the question of “how and why” humans do what they can do. I can understand the “how” aspect of this question, especially in terms of cognitive science’s goal and the process of explaining what cognition is. However, the part that has confused me since the beginning of the course is the “why” part. The answer to “why we do the things we do” seems to be separate from the effort of reverse-engineering (because how can we reverse-engineer a purpose?). I know we still haven’t covered evolutionary psychology yet, but I was wondering if the “why” question can be answered by Darwinian evolution: the reason for organisms’ performance/behavior (whether sensorimotor or cognitive) is that it was adaptive for survival, and thus it persisted. Overall, I’m not sure whether the “why” portion of the easy problem is connected to a purpose (an evolutionary function) or is just another way to explain cognitive capacity. (In other words, can we reverse-engineer “why”?)

    ReplyDelete
  68. Re-posted: Valentina Martinez commented on "1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20"
    Sep 6, 2023

    I found this reading very interesting because ever since I took my first cognition class, I have found myself agreeing with the theory that the brain and the mind work like a computer, with the brain representing the hardware and the mind the software, and what we call cognition being the computing and manipulating of symbols in a kind of “middle step” that allows us to get to an output. Yet, when I got to the part of the reading that explained that a machine that passes the Turing Test (TT) could do so without really understanding the symbols it is processing, it brought back the concept of symbol grounding and how we can’t (yet) really explain how a machine assigns meaning to the symbols it processes; thus, we cannot explain cognition in terms of computations. All of this brought me to the thought that only human minds can systematically interpret symbols, and that viewing the mind as a computer is not the right way to go about cognition, because even if both people and computers can compute, computers cannot do what a human can do; they cannot have the human experience that we humans have.

    ReplyDelete
    Replies
    1. CATCOMCON, September 7, 2023 at 7:46 AM
      Now ask yourself why you are so sure computers could not have the human experience that humans have -- even if they could pass both the verbal (T2) and robotic (T3) version of the Turing Test (Week 2), Turing's reply will be "If you can no longer tell them apart, what difference are you talking about?" (This is called the "Other-Minds" problem, but it is a problem for philosophers, not for cogsci.)



      PS "Stevan Says" that computation alone cannot pass T3, and nothing that cannot pass T3 can pass T2: Has ChatGPT proved me wrong?


      Delete
  69. This article highlights the difficulty of addressing our ability to think, and more specifically of understanding how we are able to do what we do, beyond the physiological point of view that Skinner suggested. Even simple problems, such as recalling our third-grade teacher’s name, require black-box processes that can’t be assessed directly through introspection or computation (or can they?). The homunculus theory suggested a way of avoiding those problems, but it seems that the best way to face the questions regarding our capacity to think is still computation.

    ReplyDelete
  70. *REPOST*: I cannot find my original comment

    Before getting into my main point, I think the idea of the homunculus, a little person existing in our head to explain all the concepts that we currently cannot explain any further, is very cute. If the homunculus theory were to be true, then there could be an infinite nested chain of homunculi existing. Now, I feel like I have finally started to grasp the movie, “The Matrix”. If humans were to be living in a simulation and created by something more intelligent than us, then the same could apply to our ‘creators’, and so on.

    Something that I was confused by was the concept of symbol grounding in humans. Is symbol grounding only rooted in our physical senses? Have there been developments regarding understanding different types of symbol grounding systems besides our senses? I felt like this week’s reading was able to capture the unseen complexity of what cognitive science is. Next time someone asks me if cognitive science is the same as psychology, I’ll make them read this blog, haha :)

    ReplyDelete
  71. **Repost **
    As a cognitive science student, I love thinking about thinking! I quite enjoyed this reading because it helped put together everything that we learn across our different classes and domains. I especially liked the line “We are unaware of our cognitive blind spots—and we are mostly cognitively blind.” This quotation puts our understanding of human cognition into perspective, as it highlights the inherent limitations in our ability to perceive and process information introspectively and objectively.

    ReplyDelete
  72. Just as I was about to submit my skywriting file, I realized I somehow managed to miss this one. Although we're well beyond week one by now, revisiting this paper with a more complete understanding of Turing tests, computationalism, and the symbol grounding problem made me understand several things more concretely. First, although Pylyshyn may have gone too far with pure computationalism, computation is still an important and large part of the processes we mean when we say "cognition". Moving beyond computationalism does not mean chucking computation out the window, but acknowledging the other dynamical systems inherent to the hardware that are necessary to do as thinking things do.

    One of the things I like about acknowledging the role of hardware and biology is that it makes space for large-scale, top-down processes to have a role in the things we do as thinking things. This feels intuitive to me, that our world affects the way we think, but in a purely computational view there’s not nearly as much (if any) room for this. But I believe that taking hardware into consideration not only makes more sense in terms of explaining AWOL cognitive processes like psychosis or delusions, but also in explaining the influence of our beliefs and experiences in phenomena like placebo or priming.

    ReplyDelete
  73. I found the last part of the text the most interesting, where it is argued that the email-style TT is simply not sufficient to explain all domains of cognition because it is solely reliant on computation, which has been shown to be insufficient through Searle’s Chinese Room argument. Harnad argues that scaling up the TT to a robotic version that is capable of interacting with the world through sensorimotor capacities is necessary for bridging this gap. Of course, this raises the question of how it can be done, but I think it is a very cool idea, and it departs from the oversimplification of cognition that has been seen over and over again in computationalism, behaviourism, etc.

    ReplyDelete
    Replies
    1. From the rich table filled with insights from our class throughout the semester, it's really interesting to go back in time and take a look at previous skywritings and readings from earlier weeks. Megan makes really good points in urging us to consider the limitations of a purely computational perspective on cognition. The importance of incorporating sensorimotor capacities into the Turing Test framework opens a new chapter in how we think about cognitive processes. This integration could lead to a more holistic understanding of cognition that honors the complexity of the human experience. Based on what we just covered in class, would it be beneficial to try to incorporate an emotional aspect into the Turing Test as well? Would that at all make it more difficult to produce human-like consciousness? More importantly, is it even necessary, or is having the sensorimotor aspect enough?

      Delete
  74. In the paper Cohabitation: Computation at Seventy, Cognition at Twenty, part of the paper discusses what happens when we introspect, which is called the “mental imagery theory”. It mentions an example of recalling the name of the author’s third-grade school-teacher, where the author first has a mental picture of the teacher in his head and then names the picture. The author stresses that the process of remembering the teacher’s name was to first picture the teacher and then identify the picture, rather than remembering the name first. Then the author raises the fundamental questions: “How do I come up with her picture? How do I identify her picture?”. Furthermore, he emphasizes that “we don’t notice what we are missing: We are unaware of our cognitive blind spots—and we are mostly cognitively blind”. These questions and remarks really surprised me, since after reading the example of the third-grade teacher, I realized that whenever I recall the name of any person (friends, teachers, distant family members, etc.) I get an image of them in my mind first and then remember their name. I never realized that I am living with such cognitive blind spots, which came as a surprise to me while reading this paper.

    ReplyDelete
  75. Reflecting on the reading, I grapple with the nuances of cognition, computation, and how they interact. The reading certainly underscores the complexity of translating our multi-sensory experiences into computational models. I find it interesting how cognition, often conflated with computation, transcends mere algorithmic processing. The real challenge lies in capturing the essence of human experience, the texture of a leaf or the roughness of bark, for example, beyond the symbolic representations of a machine. This dichotomy between human cognition and computational mimicry raises the question of whether it is even possible to encapsulate the depth of human sensory experience in computational terms.

    ReplyDelete
  76. Professor Harnad’s paper points out what is right and wrong about behaviourism and computationalism. He argues that cognition cannot be all computational, just as it cannot be all homunculi or mental images.
    I was impressed by the idea that word learning is essentially symbol grounding. The word-learning process requires infants to interact with objects, extract the important features, and recognize those features when they see a new object.
    If my understanding is correct, to accomplish this kind of word learning, T3 is needed, because the process requires sensorimotor skills to interact with the objects; symbol grounding relies heavily on sensorimotor capacity.

    ReplyDelete
