Monday, August 28, 2023

3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?

Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.



Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).

84 comments:

  1. Professor Harnad presented a well-structured and comprehensive critique of Searle's Chinese Room Argument. He broke down and redefined the three tenets of Strong AI, and then expanded on them, examining Searle's arguments. One argument I spent more energy on is the Systems Reply. The idea is that Searle doesn't understand Chinese, but imagine him, the room, and the writings on the wall giving rise to a system, which itself understands. Searle then counters by letting the individual internalize all of these elements of the system, so that the individual incorporates the entire system, and still does not understand. Can this argument go on and on? What is the point? Does it only show that cognition cannot be all just computational (implementation-independent implementations of computational states), because you can always imagine these states as being internalized into a non-understanding system, never giving rise to meaning?

    Replies
    1. NOTE TO EVERYONE: Please read the other commentaries in the thread, and especially my responses, before posting yours so you don't just repeat the same thing.

  2. One argument against Searle that is explored in this second reading is the fact that though conscious understanding might not be going on in the Chinese Room Argument (CRA), UNconscious understanding would be. The counterargument given in the reading is that making such a distinction between conscious and unconscious mental states in the first place only makes sense in the context of a conscious entity. I would add here, however, that the distinction between conscious and unconscious mental states is difficult in the first place: just as I am conscious of what I am conscious of, I cannot - by definition - be conscious of what I am unconscious of. Therefore, this goes beyond the other-minds problem - how we can't be sure of others' mindfulness/consciousness (interchangeable?) - because we can't go inside their heads: in the case of unconsciousness, I argue that I myself can't even be sure of what I'm unconscious of.

    Replies
    1. Yes, most of HOW we are able to do the things we do is not accessible to introspection. That's what the 3rd grade schoolteacher example was about. In the same way, HOW we understand language is not accessible to introspection. But THAT I don't understand Chinese (and I do understand English, or Hungarian) is certainly something that I have Cartesian certainty about. And so does Searle (about English: not Hungarian!).

      That too is part of Searle's Periscope: What is that?

      Don't get lost in the Hall of Mirrors created by the weasel-word "conscious" or "aware": I am aware that you're aware that he's not aware that I'm aware that... etc. It's not so easy to engage in that kind of level-hopping mental vertigo and still make sense if you just talk about what you do or don't feel.

  3. I do not fully understand what is meant by "the soft-underbelly of computationalism". It is discussed that the only way to experience another entity's mental states would be to occupy the same computational state that caused that mental state. When I originally read this section, I thought that it related to strong/weak equivalences. If one entity's computational state shared a strong equivalence with another entity's computational state, they would have the same mental state. I do not know if I interpreted it correctly, but I do not see how this does not go directly against the first tenet of computationalism. If mental states are just computational states, I do not understand how one follows from the other.

    Replies

    1. I may be wrong, but the way I understood it, the term "soft-underbelly of computationalism" refers to a vulnerability or weak point in the computational theory of mind, which posits that mental states are computational states. The way you interpret strong/weak equivalences is interesting and would make sense: if two entities have strongly equivalent computational states, they should, in theory, have the same mental states. However, this notion challenges the first tenet of computationalism, which asserts that mental states are purely computational. If mental states were solely computational, then strong equivalence should suffice for identical mental experiences.
      The discrepancy you've pointed out reveals a potential limitation - if mental states are more than just computational processes, then strong equivalence might not be sufficient for capturing the full range of human experience and cognition. This discrepancy exposes a vulnerable aspect of computationalism, suggesting that the theory may require further refinement or even a paradigm shift to account for the complexities of mental states, thereby exposing its "soft underbelly." But I also think I might be wrong in how I interpreted it.

    2. (Sorry to suddenly jump into the conversation, but I really want to get some clarification about T3, the other-minds problem and Searle's Periscope.)

      "T3 can't be all computational" - from my understanding, the answer could be found from tenet 2, as computational, or called function according to the later part of the article, are implementation-independent, while to pass T3 also requires the structure or hardware in addition to the other performance capacity.

      Searle's Periscope targets the 'soft underbelly of computationalism' because it seems to reach only the function (mind, or software, or computation) and to do no harm to the structure. This means SP would be harmless if the machine passed T3. In the case of cognitive science's reverse engineering, the structure or hardware should be prior to the function, which is crucial for computationalism; as long as we have the right structure, the function would then come up. The "other-minds" problem is here: it reveals that we cannot comprehend what really happens inside other minds. For this reason, it is necessary to add T3, T4 and T5 to the foundation of the TT.

  4. I had the same problem while reading this text. In fact, I didn't understand what it really means to "experience someone else's mental state". If mental states are indeed computational states, then why couldn't we share the same feeling if we reached that computational state, and why would we want to prevent this from happening? Furthermore, regarding tenet (1), can we assign some "mental states" to a T4 entity?

  5. Harnad is examining the pros (thinking cannot be all computation) and cons (thinking is not computation at all) of Searle’s CRA. I don’t understand how Searle is arguing that thinking cannot be computation at all. The CRA has to be a hybrid system, the scratch paper and rules being the computation and Searle being the semantic interpreter. The Chinese version just shows that semantics is required (not a 100% computational system), but if it were an English version, the room would need both computation (what Searle is doing) and semantic meaning (Searle understands what he is doing).
    I guess I just don’t understand how Searle is saying there is no computation? Could somebody explain this? The way I understand the CRA test is that it has to use computation (not 100% but at least partially).

    What is meant by Searle’s Periscope? “a system can only protect itself from it by either not purporting to be in a mental state purely in virtue of being in a computational state -- or by giving up on the mental nature of the computational state, conceding that it is just another unconscious (or rather NONconscious) state” I have read through this passage a few times and I still don’t understand what is being said.

    Replies
    1. Hi Kaitlin,

      I noticed Prof Harnad already responded to your question but here is the response I was writing yesterday but forgot to post!
      In the CRA, Searle is implementing a computation (following rules to manipulate symbols) to respond to Chinese questions. He suggests that it doesn't matter whether it's Searle or a created computational system (T2) executing this process. If computationalism holds, both systems should exhibit the same understanding. When Searle is a part of the system, he asserts that he doesn't understand Chinese. Therefore, if computationalism holds, then we can experience what other beings experience by using the same program.

    2. Hello Miriam, thanks for the explanation!
      So in short, Searle's periscope is just that because computationalism requires implementation independence, and the CRA shows no implementation independence -> computationalism cannot be true. Is that right?

    3. Hi Kaitlin,
      Hope it's fine for me to reply here! As far as I understood, yes, computationalism requires implementation independence. However, what the CRA shows is not that there is no implementation independence, but that Searle has no understanding when Searle himself is the system running the program (to write Chinese characters). So if computationalism holds (implying that implementation independence holds), then no matter what machine is used to run the program, there cannot be understanding - since when Searle runs the program, there is no understanding.

      The machine running this program can pass T2, but it does not have understanding. According to computationalism, T2 is enough to show understanding. So computationalism cannot be true.

    4. Adam, you're spot-on. But it's not just computationalISM that requires implementation-independence (which is the same thing as the software/hardware distinction), it's computation itself. (Look at the definition of computation in the PPTs and the lecture recordings.) So, if cognition is just computation, then it too must be implementation-independent.
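
      To make the software/hardware point concrete, here is a minimal, hypothetical sketch (an editorial illustration, not from the paper; the rule names and symbols are invented): the same shape-based rule table (the "software") executed by two physically different lookup procedures (the "hardware"). Their input/output behaviour is identical, and nothing in either procedure corresponds to understanding; nothing about the behaviour depends on which procedure carries it out, which is the sense of implementation-independence the CRA exploits.

        # Toy illustration of implementation-independence (all names invented).
        RULES = [("squiggle", "squoggle"), ("ping", "pong")]

        def machine_a(symbol):
            # Implementation A: build a hash table and look the symbol up.
            return dict(RULES).get(symbol, "?")

        def machine_b(symbol):
            # Implementation B: scan the rule list one pair at a time.
            for inp, out in RULES:
                if inp == symbol:
                    return out
            return "?"

        # Same rule-following behaviour, different physical details.
        assert all(machine_a(s) == machine_b(s) for s in ("squiggle", "ping", "blah"))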

  6. In the Chinese Room Argument (CRA), Searle has T2 as the candidate of the Turing Test (TT), which is the email version of the TT hierarchy. Searle concludes that cognition is not computation, as it lacks the property of understanding. So this leaves me wondering: would T3 or T4 be a better candidate for passing the TT? (We briefly discussed this during last class, but we did not end on a conclusive answer.) I believe that T3, with functional indistinguishability, would be sufficient, as compared to T4, which requires both functional and structural indistinguishability. Seeking structural indistinguishability is similar to Searle's argument regarding the importance of understanding neuroscience to determine what cognition is. This only makes things more difficult for ourselves: with T3, there are still a lot of degrees of freedom (or uncertainty) left to reverse-engineer cognition, without also having to reverse-engineer the brain, which would be required with T4.

    Replies
    1. Good points. Searle's CRA only applies to T2, and only if passed by computation alone. Otherwise, no Periscope, and back to the usual limits of the other-minds problem.

  7. I had some trouble understanding why the Chinese Room Argument wouldn't apply to a T3-passing machine, but I think I've figured it out now: a T3 machine is incompatible with computationalism, because T3 involves more than just pure computation (namely, sensorimotor capacities), and computationalism holds that cognition is ALL computation and nothing else. Only computationalism-compatible machines are proved not to have understanding using the CRA. This is because it is adherence to the tenets of computationalism which allows us to use Searle's Periscope on a system to peek at its mental states. We can use Searle's Periscope only on systems that abide by the computationalist idea that cognition is an implementation-independent program, because this property allows us to run the program on some other hardware - namely, our own selves - and judge whether we possess the quality of understanding while running this program which is supposedly a complete model of cognition. Since a T3 machine doesn't adhere to the tenets of computationalism, we can't apply the CRA to it.

  8. Professor Harnad deconstructs Searle's argument, concluding that we should believe that thinking cannot fully be ascribed to computation, but that computation is only a moving part in what thinking entails. We should move beyond pure computation, or T2-passing machines alone, as they are not enough to explain understanding a language, as per the CRA. However, this still leaves me a little confused about Searle's response to the Robot Reply in the initial paper. He states that "'perceptual' and 'motor' capacities add nothing by the way of understanding" (Searle, 1980, p.7). However, as I understand perceptual and motor capacities to be sensorimotor experience, would this not be a step towards T3 and not T2, where the CRA is at its strongest?

    Replies
    1. Hi Ethan,
      I also struggled with Searle’s response to the Robot Reply. From my understanding, Searle is arguing that purely providing the robot with sensory and motor capacities (in this example the ability to see, move, interact with objects) will not necessarily lead to understanding or intentionality. I believe here Searle is questioning how we learn the association between symbols and their meaning. From his response to the Robot reply I believe his argument is that purely providing this sensory stimulus, but with no way to link these sensory experiences to the symbols which the machine manipulates, does not aid its ability to understand.

    2. You're both right. Not only is Searle's Response to the Robot Reply inadequate, but the Robot Reply is inadequate. Sensorimotor function is not an add-on peripheral device for a computer. The brain, for example, is sensorimotor through and through; and even today's robots are not just computers with cameras and wheels. And even with no words to ground, animals without language can think, and they are not computers on wheels either. They can learn both categories and skills. The challenge of symbol grounding is to build on this sensorimotor capacity so as to connect category-learning with naming, propositions, and language. Stay tuned....

    3. I appreciate this clarification, as it highlighted to me again the difference between T2 and T3 (which is everything we can do, but materially different). I am writing this reply a little late, so I have already read about and attended the classes wherein we discussed symbol grounding, and an important takeaway is that we learn to sort and manipulate the world according to the kinds of things in it, based on what sensorimotor features our brains can detect and use to do so. This is why ChatGPT, when asked if it is symbol-grounded, will respond "not in the traditional sense," as grounding requires sensorimotor capacities.

    4. Josie, that's right, but ChatGPT is wrong: not just ungrounded "in the traditional sense" (which is weaselly): ungrounded.

  9. While reading Prof. Harnad dissecting Searle's Chinese Room Argument, a question popped into my mind. The reading ends by acknowledging that Searle helped his field open up to new concepts like embodied cognition. Searle does go too far by saying that cognition isn't computation at all, and I find the concept of hybrid cognition/computation believable. But isn't the entire discussion about how a computer could simulate/replace cognition? If we settle on this hybrid, then everything is right, from embodied cognition to computationalism.

    Replies
    1. Computationalism ("Strong AI") was the hypothesis that cognition is only computation. If it is hybrid, then computationalism is wrong. (And what do you mean by "simulate"? The task is to reverse-engineer, not to simulate.)

  10. What I wonder is what the different theses of the mind are based on. For example, the tenets that have been reformulated to fit computationalism instead of Strong AI: how do we get to these principles, especially the first one, which states that mental states are just implementations of the right computer program? Are they conjectures that we are trying to refute or accept, or are they conclusions drawn from empirical research?
    But on the reading, I think it's interesting to see the nuances in the Turing Test and its real-world applications, and that passing T2 in one species might not mean the same thing as passing T2 in another species, due to different levels of functional complexity. I think it forces us to reconsider the question of "what does passing the TT mean," and does the machine have to look externally the same as a human to pass it, especially considering the hardware-independence tenet?

    Replies
    1. Computationalism is the same as "Strong AI." And it is not a "principle" but a hypothesis, to be tested, about what mechanism could DO everything a thinker can DO. Whatever mechanism can pass the TT can pass the TT, which means the reverse-engineering has succeeded in explaining how to produce cognition.

      The appearance is not part of the TT, but facial expressions, tones of voice, and what the body has to be able to DO are part of T3. T4 is another question...

  11. I do agree with the fact that Searle's argument overlooks the fact that understanding could emerge from a system-level perspective, even if individual components (like the person in the room) don't possess understanding. In other words, the system as a whole might understand Chinese even if the individual components don't. Also, don't people understand things differently? Doesn't culture also play a role in how people perceive the word "understanding"? How do we even agree on what the word "understand" means? Doesn't everyone have a different definition for that word? But I agree with Searle in the sense that consciousness can't emerge from computation alone, and I do not think consciousness will ever be able to truly exist in a machine.

    Replies
    1. You have not yet understood the CRA. You are just repeating the Systems Reply, which Searle refutes. (How?) Please read the other comments and my replies in 3a and 3b. Also about "understanding."

    2. Hi Marine! After reading Searle’s paper in 3a I was also initially interested in Searle's argument, and the Systems Reply. However there were a few things Professor Harnad touched on in the reading that made me reconsider the Systematist approach. One of the issues with the Systems Reply ties into conscious vs. unconscious understanding. The Systems Reply opens the door to “unconscious understanding” with a “conscious” understander, or subsystem that understands, somewhere within Searle himself in the CRA that he is unaware of. I could be wrong, but I interpreted this as similar to our discussion of the homunculus explanation for thinking of our 3rd grade teachers - it just shifts the question over to another sub-layer. Systematists can also argue that the whole system understands Chinese in the CRA, even if Searle doesn’t, but that doesn’t really get to the core issue of whether that “understanding” is computational or not.
      I also think it’s important to focus in on “understanding” in this case. It’s for sure a weasel-y word, but the way you reference socio-cultural influence on understanding is I think a bit macro for this specific reading. I think it’s helpful to just think about it in terms of understanding vs. not understanding the Chinese symbols in the CRA.
      Please let me know if I’m wrong about anything - this is just how I understood (apologies for the weasel word) the various refutations of the Systems Reply in the reading.

    3. Hi Lillian, hopping in here to thank you for your thoughts on the systematist approach. I found myself coming to similar conclusions after reading Harnad's critiques in 3b; the idea seems to present an 'easy out', but really just shifts the problem and peels back another layer of convolution of the proverbial onion.

  12. From this reading, the main point that was clarified for me was that completing the Turing Test does not reveal that computation is cognition. The quote "This does not imply that passing the Turing Test (TT) is a guarantor of having a mind or that failing it is a guarantor of lacking one. It just means that we cannot do any BETTER than the TT, empirically speaking." made me come to this conclusion. Additionally, the duck comparison confirmed my understanding that since we cannot become another being or feel what another being is feeling, reverse engineering is the closest we can come to understanding cognitive function. From my knowledge, it is said that due to the "other-minds" problem, even if we create a reconstruction of a duck that we cannot tell apart from a real one, we are still missing a component that is not observable: feelings/emotions. The reconstructions, and even many of the T tests, only focus on what each organism/machine can do, not on what it feels.

    Replies
    1. The TT includes T2, T3, and T4. The CRA only refutes T2, if passed by just a computer, computing: How does CRA refute this; and why can it not refute T3 or T4?

      But Turing points out quite explicitly that the TT can only solve the Easy Problem, not the Hard one. How and Why not?

    2. The CRA is able to refute T2 because Searle (as the machine) is only computing verbally in his experiment. He can have all the shape-based rules (the algorithm) in his head to pass T2 in Chinese without the feeling of knowing Chinese. The CRA would not work for T3 or T4 because those tests require a machine that has symbol grounding and can do everything a human can do, verbally as well as robotically, in the external world. This would not work for Searle, because he would attach meaning to referents during his symbol grounding. This learning adds the feeling of knowing Chinese and connects Chinese symbols to referents. These cases would not be able to prove that cognition is not just computation, because the test cannot separate computation from feeling.
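
      To make the contrast concrete, here is a crude, hypothetical sketch (an editorial illustration only; the Chinese strings, category names and features are invented, and real grounding would involve learned sensorimotor feature detection, not a hand-written table): a T2-style responder that maps symbol strings onto other symbol strings by rote, versus a toy "grounded" namer that can only apply a category name after detecting features of a (simulated) sensory input.

        # Ungrounded: pure shape-based symbol manipulation, symbols in, symbols out.
        RULEBOOK = {"什么是苹果?": "苹果是一种水果。"}

        def t2_responder(symbols):
            # Look up the reply paired with the input string; no referents involved.
            return RULEBOOK.get(symbols, "请再说一遍。")

        # "Grounded" (toy version): the name is connected, via feature detection,
        # to something sensed, not just to other symbols.
        CATEGORY_FEATURES = {"apple": {"round", "red", "graspable"}}

        def grounded_namer(sensor_reading):
            # sensor_reading stands in for sensorimotor input, e.g. {"round": True, ...}
            detected = {feature for feature, present in sensor_reading.items() if present}
            for name, required in CATEGORY_FEATURES.items():
                if required <= detected:  # all required features were detected
                    return name
            return None

        print(t2_responder("什么是苹果?"))  # a symbol string, connected only to other symbols
        print(grounded_namer({"round": True, "red": True, "graspable": True}))  # 'apple'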

  13. What I liked about this paper was how it cleared up the Strong AI arguments Searle had made, and how Harnad related these arguments to his earlier T2, T3, and T4 examples. The Strong AI position is, firstly, that mental states are just implementations of computer programs; secondly, that computational states are implementation-independent; and thirdly, that we can't do better than the Turing Test for testing the presence of mental states. It is a lot easier to disagree with Strong AI when the arguments are put this way. For example, the third point doesn't make sense because the TT only measures the performance capacity or functional properties of the machine. It is testing a machine in the T2 category.

    Replies
    1. Good summary -- except that the TT is not just T2, it's T2-T4. And that really is the most that Cogsci can do to reverse-engineer cognition (i.e., solve the Easy Problem but not the Hard one.) Do you have any other ideas?

  14. Correct.

    We'll learn more about the "hybrid road" in the weeks to come, about the symbol grounding problem, category learning, categorical perception, and language.

  15. Neural nets can be just computational too, but sensorimotor function (and any other analog function) cannot. So if cogsci successfully reverse-engineers a sensorimotor robot that can pass T3 (a hybrid system, hence immune to the CRA and impenetrable by Searle's Periscope), then cogsci has an explanation of how thinkers can DO what they can do (the Easy Problem). But the Hard Problem is not solved.

  16. As argued in the text and in class, the Turing Test was designed to assess indistinguishability of a machine's performance from that of a human. However, Searle's Chinese Room Argument uses his own non-understanding of Chinese to argue that a computational program is in the same position and does not understand what it is producing. This disregards the existence of conscious and unconscious automatic understanding processes taking place.

  17. The duck analogy clarified a lot for me, especially in the distinction between a T3 and T4 machine. However, I wonder if we are being a little hasty by putting aside the “microfunctional continuum between D3 [functionally indistinguishable] and D4 [structurally and functionally indistinguishable]”. I understand that we have a lot of degrees of freedom open when we talk about structure/function coupling in the sense of webbed feet/swimming or other (relatively) simple causal-mechanical phenomena. But when we talk about brain structure/function, should we not be more cautious of this microfunctional continuum, since our degrees of freedom are much more constrained?
    I understand, and agree with, the fact that we don't necessarily need to have the brain to reverse-engineer cognition. However, a lot of the human brain's function comes from its structure. If we could hypothetically figure out the structure of neural connections in the brain, as deep neural networks (DNNs) in computational neuroscience aim to do, would that not give us the answer we need in regard to both the conscious and unconscious understanding that we experience?
    (I guess this means that I am more in favour of T4 than T3 in its ability to be indistinguishable from a human in its cognitive capacity)

    Replies
    1. Hi Paniz,

      Generally speaking, I completely agree that we are hasty in setting aside the microfunctional continuum between D3 and D4. All evidence points to cognition being an emergent property of the brain and so I believe you are correct in saying our degrees of freedom are limited in that regard.

      However, I believe in the duck example we are espousing tenet 2 of computationalism, which trivializes structure for the most part. In this example, the only structures that would be relevant would be the ones required to achieve D3; this presupposes the requisite "brain structures" to replicate function.

      That said, computationalism cannot be the case, so you are right in every other sense.

  18. Ok. So I agree that passing the T2 TT is enough to explain cognition. I also agree that going so far as to reverse-engineer / clone a human brain (T4) is over-reaching… However, how does a hybrid T3 model account for the symbol grounding problem? Or even the other-minds problem? How do we know that a machine that processes sensorimotor information actually assigns meaning to the symbols it is processing? And what does this tell us about the hard problem of cognition? Harnad suggests that neural nets have more potential to explain cognition. If we accept parts of Searle's argument, that cognition is a biological phenomenon, we must consider the biological organ which produces these thoughts and feelings. The brain is highly connected into precise networks between different nodes that process external sensorimotor information as the basis for thinking/feeling/cognition in higher-order association cortex... I still feel like this answers the easy problem but not the hard problem…

    Replies
    1. Hi Kristi--you might be jumping the gun a little by agreeing that passing T2 is enough to explain cognition. The CRA argues that passing T2 is exactly not enough to explain "cognition" (by which I mean thinking, understanding) by showing how Searle himself could pass T2 in Chinese without actually understanding Chinese. Harnad did mention the potential of neural nets for explaining thinking, but unless I'm mistaken, they wouldn't have the same potential without sensorimotor abilities. As for T4 being over-reaching, I think you point out some good questions about how exactly T3 would be satisfying for our explanation. It seems like a T3 would have to use its sensorimotor abilities to learn by being embedded in the world, but you would still have to assume from its behavior that it's grounding symbols in meaning in the world.

  19. I found it curious that one of the early critiques of Searle's argument was that it would prevent funding for AI research. It seems that instead, Searle's explanation should be used to refine the methods we use to build artificial intelligence. Along this line of thought, the article mentions a hybrid model in which computation is involved but the system may not be completely implementation-independent. To formulate such a model, it is interesting to look at animal behavior: we cannot know what their version of thinking is, but we do know they can have very complex behaviors and can learn/be trained and evolve. This shows they can respond to external environments and learn and change their behavior depending on whether the outcome is beneficial or energy-sapping. However, unlike implementation-independent neural nets that learn and backpropagate based on reinforcement, animal processes are not implementation-independent, in terms of their evolutionary abilities to not only learn but physiologically change over time and generations. This evolutionary change suggests that what may be missing in machine learning is for the physical implementation of the computation to be able to respond and process changes in the computation as it learns (not just for the physical implementation to visually resemble humans, as in T3).

  20. Something that I found quite interesting about this reading was the statement that "passing the Turing test does not guarantee having a mind, and failing it does not guarantee lacking one". The last part of the sentence is somewhat contradictory to me because isn't the point of the Turing test to allow us to know whether we are talking to another human or to a "machine" (computer program)? But I think that I'm taking a 'computationalist' perspective when I say that, because if I understood correctly, computationalism holds that the Turing test is decisive.

    Replies
    1. According to my understanding, I think that section is referencing the limitations of reverse engineering. Even though the Turing test is the best we can do right now, it can’t be completely decisive unless you’re only looking at it from the perspective that cognition is computation, which only takes into account the function. I do agree with you in the sense that if you believe that the Turing test is always decisive in determining whether responses are human or computer generated that you are following the computational view.

    2. From my understanding of the reading, T3 would be enough to reverse-engineer cognition itself and bridge that gap. It is not only indistinguishable from us in its pen pal capacity but also its full sensorimotor capacity. Therefore it could not be simulated by Searle without him being the system. By being embedded in the world, we’d have to assume from what it does that it’s grounding symbols in meaning.

  21. This reading really clarified the problems I had when reading Minds, Brains, and Programs. I couldn't articulate to myself why Searle's rebuttal of the Robot Reply, and indeed the Robot Reply itself, seemed insufficient, but I now feel like I have a grasp on it. The Robot Reply seems to suggest that the only thing sensory and motor functions could do is generate and respond to symbols that the brain manipulates, but this is another dualism. The brain is itself a dynamic system, and one that interacts with the rest of the body. To suggest that our understanding of the world is not fundamentally connected with our ability to interact with it seems obviously incorrect.

  22. I enjoyed reading the historical context of that paper, which gives a glimpse into the research world. The argument was clear, and I don't think I have much more to add. What bothers me, however, is that we are still using words like 'consciousness' and 'understanding' without any operational definition, assuming that we already have an implicit idea of what it's all about (which seems to be the case). How are we supposed to think about whether a computational machine can think, understand and be conscious without defining what these are? It appears to me that 'consciousness' and 'understanding' are used in the sense of subjective experience and feelings (which constitute the hard problem). However, 'consciousness' can also be understood in the sense of the easy problem, as a cognitive capacity that has functional properties. From my understanding, this is mainly what the different theories of consciousness are about. I'm sure (and I hope) that's something we will discuss later in the course.

    Replies
    1. I agree with you. This ambiguity surrounding the definitions of these critical concepts makes it difficult to understand how we can evaluate the capabilities of computational machines in terms of thinking and consciousness. The "hard problem" and the "easy problem" need a more precise framework to assess these cognitive capacities. That being said, I think that this is one of the reasons why Harnad, and even Searle and Turing, are addressing the question by focusing on observable behaviors and functional capacities (measurable outputs and problem-solving abilities). In future experiments, maybe developing criteria based on these observable outcomes could help in the evaluation of whether computational machines can think/exhibit intelligent behavior/perform effective problem-solving. In this way, we can bypass the need for a complete definition of 'consciousness' or 'understanding.' This approach, while not solving the problem entirely, provides a practical way to assess and compare the cognitive capacities of computational systems without delving deeply into those abstract philosophical debates.

  23. Tenet 2, implementation-independence: it is not the hardware (the brain) that matters but the software (the mind). I have trouble understanding this. How can "the physical details of the implementation [be] irrelevant to the computational state that they implement?" The mind's capacity to think, understand, hold beliefs, and to 'feel' is directly correlated with the physical medium in which it operates (that we know of).

    One of the reasons we cannot be conscious of every single piece of information in our minds is the brain's limited capacity to hold such vast information at a given time. But it could be that the mind, even without the brain, is limited. The mind only operates in parallel with the physical, chemical components of the brain. So how is it possible that we can separate the structure and its function, when there is no function without its corresponding structure? And if we change the structure, wouldn't its function also change accordingly?

    Replies
    1. Hi Elizabeth, when you say "the mind only operates in parallel with the physical chemical components of the brain", I think it is for exactly that reason that the "limits" on the mind which you speak of are also determined by the "program" of the brain; that is, the "dynamical system" that humans are endowed with that hosts the "computer program" which we think of as cognition (see the part of the reading on the combination of tenets (1) and (2)). In fact, from my understanding of the reading, the capacities of the mind (and, complementarily, its "limits") ARE the computer program. We can separate the "structure" and its function because as long as a computer of some other substance than the human brain can implement the same computational states, it is functionally equivalent: mental states=computational states. Perhaps where you might be confused is that "structure" in this reading refers to the computer program and is distinct from the physical implementation (i.e., brain, computer, both of which we want to host the SAME structure/program).

    2. I think the point of the claim that "the physical details of the implementation are irrelevant to the computational state that they implement" is that the same cognitive processes can be implemented in any physical system, so long as that physical system can implement the relevant mental/computational states. The claim does not imply that the algorithm and the physical system that implements it are not deeply intertwined, but rather that different physical systems can implement the same algorithm.

      However, to your point, I would argue that the mind’s capacity to think, understand, etc. is NOT directly tied to the brain. Perhaps these cognitive capacities are indirectly related to the brain, in the respect that our mental abilities are emergent from the collective activity of the brain, rather than tied directly to any specific region or collection of neurons. It could be that the two are not directly related at all, and one emerges from the other rather than being instantiated within it in a specific manner.

  24. Professor Harnad argues that although Searle's CRA highlights the shortcomings of relying on computational methods to explain cognition, it does not provide a definitive answer when it comes to fully explaining the fundamental nature of cognition. It was interesting to see that Searle's argument has actually spurred exploration into embodied cognition and situated robotics. It also led to an acknowledgment of the necessity for hybrid models that integrate computational procedures and sensorimotor experiences. Searle also believed that CRA undermined the reliability of the Turing Test as a definitive indicator. Harnad helped me understand that the Turing Test was never regarded as infallible or a definitive proof of understanding. In addition, he made me realize that Searle's critique is primarily aimed at the T2 level, not T3 or T4. It is also only effective when dealing with scenarios involving candidates that are implementation-independent and purely computational in nature. This means that the CRA wouldn't apply to non-computational T2-passing systems. It would also be ineffective for hybrid systems that integrate computational and non-computational components.

  25. Same here - if the reason why the AI could not "understand" (which the professor later reframed as "conscious understanding") in the Chinese Room Argument is that it was lacking the T3 sensorimotor aspect, then Searle could be right in a sense. Yet he did not refute the tenet on computation established earlier based on T2, which I find contradictory.

    To add a point of mine, I was wondering why Searle's paper still got refereed (favorably), accepted and published if it had such flaws. Was his argument based on the CRA considered innovative at the time? Was it what the general public wanted to hear about? Or is it simply an indispensable part of machine learning? I would be curious about the nature of psychological discussions and criticisms, especially before we find a solution(?) to the other-minds problem anyway.

  26. The role of computation is a hard question. When I looked for some related work on the CRA, I found an interesting entry on computation that may help understanding: https://plato.stanford.edu/entries/computation-physicalsystems/

    Replies
    1. The argument I am concerned about is this sentence: "This does not imply that passing the Turing Test (TT) is a guarantor of having a mind or that failing it is a guarantee of lacking one." From this point of view, I think that if a robot that can pass the T4 Turing test appears, or if a real human being who does not understand Chinese can pass multimodally (for example, if vision, taste, etc. are modeled as discrete outputs), then they have the potential for intelligence. For example, a robot can associate an object's visual, auditory, tactile, olfactory, and other states through search-based knowledge. As a Chinese speaker who understands Chinese, I would think it has "understood" Chinese, and I would also think that it understands Chinese cognition, because it is consistent with my understanding of Chinese.

      The CRA even raises a concern about one's own consciousness. Thinking back to when I was learning Chinese at age 6, my grandfather would write the words and the pictures they represented on cards, and I would match them. So, was I also learning and actually understanding Chinese? As a result, am I a strong AI, an actual human, or a weak machine? Being too philosophical is one-sided. Cognitive science starts from usage. As Dr. Harnad pointed out, reverse engineering helps us establish scientific methods to build cognition. The TT is a scientific compromise based on existing research. Blind denial will only fall into the philosophy trap.

  27. Here’s one of the things I took from this reading:
    Professor Harnad mentions two tenets of computationalism:
    (1) mental states are implementations of computational programs
    (2) the way they are implemented does not matter
    This would naturally imply that any implementation of a program would be exactly the same as any other of its implementations, which would then address the "other minds" problem. So if one implementation is missing something, the other implementations would have the same issue.

  28. Hi everyone!

    After reading "What's Wrong and Right About Searle's Chinese RoomArgument?" by Harnad, I believe that even though Searle's assertion that cognition cannot be solely equated to computations might be considered extreme, he still proves that cognition is not only computations, and that the reverse-engineering of cognition necessitates properties other than computation, including the experience/feeling of salience and understanding. I think from this, we could maybe conclude that only the right dynamic system with specific properties could “create” cognition instead of just any dynamic system.

  29. Would it be possible for the professor to expand more on the claim that "There was even a time when computationalists thought that the hardware/software distinction cast some light on the mind/body problem"? Why the use of "even a time"? What arguments have been found to deny that claim?

  30. Cognition is not just computation, since computation is implementation-independent symbol manipulation and cannot truly understand symbols (input and output of Chinese symbols: like a non-Chinese-speaking person classifying the symbols according to rules, without understanding them). This reading makes me look forward to reading about embodied cognition and the "hybrid road," which, from what I understood, would combine elements of computational models (like neural networks, a computational model inspired by the structure and function of the human brain) with grounding in the physical "sensorimotor world".

  31. The professor's notion of "dynamical systems" is pretty abstract for me. This is my impression of what it is: something separate from (in this context, opposite to) an algorithm, which does not demand any extra information other than the input it receives; something that keeps changing and adapting to the environment it is thinking in. A T4 in this sense, if the structure and function of the computation were identical to a human's, would be computing (or not) with a dynamical system. Am I on the right track?

  32. This reading made the CRA a bit clearer. Since Searle cannot understand Chinese in the CRA, neither can the other implementations understand it, thus moving away from computationalism (because computationalism, and computation, require implementation-independence). Implementation-independence suggests that the physical properties of the hardware don't make a difference, as long as the hardware can computationally pass the TT. I am still struggling to fully understand the other-minds problem: If computationalism is true, we can expect the following: since a T2-passing (in Chinese) entity is bound to focus on formal rules for word manipulation, a Chinese-understanding cognitive state should be only manipulating symbols as well. But the problem is that we don't know how any cognitive state understands (what it feels like to understand) Chinese, since no one can know (or observe) the subjective experience of feeling or understanding of another entity.
    In addition, I was wondering about the concept of "understanding a language": Is understanding based on the meaning attached to the words (symbols), like individual categorizations? Wouldn't a person who understands Chinese "feel like they understand" because they know the meaning/semantic context that the words (symbols) refer to? (Yet at the same time we can never know what it feels like for someone else to understand.) Is Searle saying that a computational state will never get the meaning embedded in the formal structure of words?

    Replies
    1. Your first wonder regards the "Other-Minds Problem". And since this problem touches on everyone's incapacity to access others' minds to determine whether they think in the same way / perceive and understand life in the same way, it is hard to really answer that question...
      Regarding your second question, my understanding of the concept described by Searle as "understanding" regards the ability not solely to grasp the meaning attached to words and symbols but also to appropriate it to yourself. And this is where, in my opinion, as you already mentioned, it is tricky: there is a limit to the access you can get to a person's inner appropriation of a concept, since you are not in their head.

  33. The second part of your comment caught my eye, and I agree with that doubt you have about Searle's Periscope. I know I am personally vulnerable to falling into a "Systems Reply" type of argument, where I'd argue something about the complexity not being there (Prof. Harnad briefly mentions that in this paper), yet the "same computational state" in the CRA, and further Searle's Periscope, seems a dubious claim because it is muddied by Searle himself being unable to be perfectly in the same computational state, by virtue of being a messy human who is a semantic interpreter (someone who himself feels). To clarify, he cannot possibly be in the same computational state as a simple (well, admittedly quite complex, because it passes the T2 TT in the CRA) Chinese-response algorithm, because his brain is still much too complex. It's as if he's (weasel word incoming) mimicking the computational state and asserting that he is actually embodying that same computational state when of course he is not. (I apologize for the admittedly quite unclear comment.)

  34. I am struggling to understand the reasoning of one of the paragraphs.

    In this paragraph the author refutes the objection that understanding need not be conscious. Or at least that language understanding isn't. In particular I struggle with this point:

    "Unconscious states in nonconscious entities (like toasters) are no kind of mental
    state at all."

    By bringing it up the author shows an assumption that machines are not conscious. The logic then, would be that since machines are not conscious they have no conscious state and thus no conscious understanding...
    This seems a bit circular to me. Please correct me if I'm wrong!

    Also, I would like to ask whether our understanding of language is truly conscious.
    In our daily use of it, it seems more like an "automatic capacity". I am not "consciously" enabling my language right now in order to write. Unless it is conscious because we hear the words in our heads? But then, what of people with no internal monologue? Maybe it is conscious because we can do it seamlessly. But so are a lot of unconscious things, like walking...

    Replies
    1. I also read this portion of the reading and thought the argument was quite circular in nature. I thought if something is not conscious to begin with, then it would be difficult to argue it has any form of conscious states. However, maybe this is the author's purpose with this statement: it is meant to be very clearly obvious and self-explanatory, thus being an effective refutation of the idea that understanding need not be conscious. Regarding your point on language, I think there is something to be said for the fact that despite not directly enabling language or walking, we can still in fact become conscious of them and affect how we do them.

  35. I really liked this response to Searle, particularly the explanation of strong AI. This explanation became clearer through the highlighting of three key elements: first, that mental states are essentially computational states; second, that computational states must have a physical implementation, although the specifics of that implementation are irrelevant; and third, that the Turing test serves as the most robust empirical test because it focuses solely on the functional structure of the system. This perspective is supported by the argument that the specific physical implementation is inconsequential, making the function—rather than the structure—of the reverse-engineered object the sole relevant aspect in determining whether it is similar to the original.

  36. I found this text very interesting, and it helped clarify some confusion I had after reading Searle's paper. I agree with Harnad when he says that Searle was “over-reaching” when he says that the CRA shifts the focus of research from computation to the brain and its functions. When I read this conclusion in Searle’s paper, I also felt that it was a very close-minded way of looking at it. Then, Harnad brought up the discussion of hybrid approaches to reverse-engineer cognition, and I thought that was very interesting. I am intrigued to learn more about the possibilities of combining computational and sensorimotor components of cognition.

  37. Prof. Harnad’s article helpfully clarified the relationship between understanding and consciousness with regards to the Chinese room problem. I found the criticism of the CRA that one might have an unconscious understanding of Chinese initially interesting, but I think the objection that for a mental state to be truly mental, it has to be conscious dispelled that. The equation of the terms ‘conscious’ and ‘mental’ helped me gain a better understanding of the fundamental implications of the CRA.

    On a different note, I found this paper interesting as an insight into the way philosophical discourse is carried out! It was interesting to see how papers are reviewed and published, and how thinkers like Prof. Harnad and Searle engage with one another’s ideas on personal levels, and also across various academic platforms such as journals, and formal paper commentaries.

  38. Is there, by chance, some use in thinking about "understanding" in the context of rule making? In Searle's example, inside the room it is already the case that there is a rule book. Rules (or programs) are not created in a vacuum, and to make rules (from subjective experience) it seems that one must understand at least something about the result of putting a rule in place (aka the output). To dovetail onto Searle's argument that computation alone falls short of manifesting "understanding," could it be the case that the ability to create the rules for a given system is itself one of the magic ingredients required?

  39. After the reading and lecture, I understand that there is no priority between the sensory network and the motor network. "Doing" is the essential part; it works as motivation through the categorization process, while being able to sense gives the chance of error-correcting feedback through supervised learning, which is how most categories are gained. Sensorimotor networks give humans their capacity, and all the potential categories should fall into the range that is decided by that capacity. This leads to the argument that only a T3 robot can provide the sensorimotor capacity to pass T2, and by the same logic, to pass T3, the T4 robot is needed. So in the end, the T4 robot is necessary to solve the hard problem.
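
    Here is a minimal sketch of the error-correcting idea described above (an editorial toy example; the features, categories and numbers are invented, and real category learning is far richer than a single perceptron-style rule): a learner guesses a category from sensed features, gets corrective feedback when it is wrong, and adjusts its weights until a full pass produces no more corrections.

      # Toy supervised category learning with error-correcting feedback.
      # Each example: ((round, red, striped), correct label), label 1 = "apple".
      EXAMPLES = [
          ((1, 1, 0), 1),
          ((1, 0, 1), 0),
          ((0, 1, 1), 0),
          ((1, 1, 1), 1),
      ]

      weights = [0.0, 0.0, 0.0]
      bias = 0.0
      LEARNING_RATE = 0.5

      for _ in range(100):  # cap on training passes
          corrections = 0
          for features, label in EXAMPLES:
              guess = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
              error = label - guess  # 0 if right; +1 or -1 triggers a correction
              if error:
                  corrections += 1
                  weights = [w + LEARNING_RATE * error * x for w, x in zip(weights, features)]
                  bias += LEARNING_RATE * error
          if corrections == 0:  # a full pass with no corrective feedback
              break

      print(weights, bias)  # feature weights shaped entirely by the feedback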

  40. Professor Harnad's article made me rethink what I understood from Searle's article. I wasn't horribly off, but I did make some conceptualization errors. I was mostly intrigued by the term "embodied cognition" and would love to talk about it. Computationalism doesn't appreciate the machine, only the program. But the machine is also very important. Embodied cognition has some relatives called embedded cognition, extended cognition, and enactive cognition. Embedded cognition is more about conceptualization and how putting ourselves in a suitable physical or social environment can help us put less effort into cognitive tasks. Extended cognition is based on the enhancing effects of environmental and social resources on our cognitive abilities, and argues that they are more than just useful tools, adding on to the embedded cognition definition. Lastly, enactive cognition is a bit more out of my comfort zone, but I will still attempt to explain it to add more to the discussion, if I can. Enactivism supports the idea that cognition happens through sensorimotor activities, hence trying to naturalize intentionality in human behaviour. In the simplest terms, enactivism tries to strip cognitive science of mental content and challenge the cognitive science field. I guess according to enactivism, the CRA could be a demonstrating example without going into the intentionality and learning aspects of it.

  41. Coming back to this paper, after more lectures where I could elucidate my understanding of T2,T3,T4, there's still a couple of things that nag me.
    These questions started as a terrible joke with my friend in the class, along the lines of "T3 isn't just a computer with a camera on wheels", but now I am thinking about it much more seriously; the joke was: "Is Stephen Hawking T3-grounded?"
    Now of course he's brain-wise and body-wise MUCH closer to you and me than a robot, and he of course once had a fully functioning motor system with which he could learn as a child, but some individuals are paralyzed from birth or an early age. How does that play into T3/T4 grounding? Can the sensory be enough (my main question, really)? Or are innate motor schemas at play? Or does T4 not require some aspects of T3?

  42. I think that Harnad demonstrated an excellent point: that Searle concluded too much from the Chinese Room Argument. Swiftly, Harnad shows that Searle's Chinese Room Argument only considered T2, the pen-pal Turing Test, and NOT T3, which limits the sweep of the conclusions that Searle made against computation. While reading, this line stuck out to me: "But the TT is of course no guarantee; it does not yield anything like the Cartesian certainty we have about our own mental states." Is there space for an empirical test that proves to be more assuring, or is the TT the best we'll get?

  43. Reupload:
    In this response to Searle, Professor Harnad explores some of the history of the debate surrounding the CRA, and the fact that much of it has gotten bogged down in ancillary problems or poorly worded responses that have occluded Searle's key argument - an argument that takes the computationalists' own beliefs as its starting point. These beliefs, that mental states are implementation-independent and also synonymous with computational states (if I am understanding correctly), come with the implication that someone who does not speak Chinese can introspect on their own understanding (or lack thereof) of the Chinese characters they are writing. Their mental state would presumably be the same as that of the computer implementing the same program. The key here, as I understand it, is that one aspect of this mental state is the lack of understanding that the non-Chinese speaker has, and that by recognizing this lack of understanding in themselves, they would know that the computer too does not understand.
    I am still unclear, however, as to how this meshes with the Systems Reply, that the agent (whether computer or person) is just one component of a larger system. We can interrogate the person, but cannot use this to gauge any sense of understanding at other levels (e.g., that of the system as a whole). Is understanding an emergent property of a system? If so, then deciding that the system doesn't understand because one component doesn't understand seems like a nonstarter.
    I feel like I am missing something here, so any thoughts / light people can shed would be greatly appreciated!

    ReplyDelete
  44. **Reposting as my comment was deleted**

    My main conclusion from Searle's CRA is that it effectively argues against the notion that cognition can be reduced entirely to computation. However, Searle goes further than the CRA actually supports, ultimately asserting that cognition has no computational component whatsoever. Searle's perspective, as articulated in the context of "strong AI," treats the Turing Test as a definitive criterion for attributing mental states. But passing the Turing Test was never intended to serve as the absolute threshold for attributing or denying mental states to a computer; rather, it functions as an empirical tool for reverse-engineering cognition.

    ReplyDelete
  45. What I took from this is that Searle's biggest mistake was declaring that cognition is not computation at all, when he should have left room for the possibility that computation is part of cognition, just not all of it. The classic example many reach for is doing simple mathematics, like applying the Pythagorean theorem or the quadratic formula. What you are doing there may be symbol manipulation, but these formulas also require some understanding of what you are actually doing (math courses often ask you to provide a "therefore" statement to show that you understand what the number you have arrived at means). Along with this, I would argue that many people, when doing this kind of simple math, have their own abstract thoughts evoked by the calculation, which may also affect how they do it (say they have an aversion to odd numbers).
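    To make the "symbol manipulation" half of that contrast concrete, here is a minimal Python sketch (my own illustration, not something from the reading; the function name is arbitrary). It churns out correct roots of a quadratic by rule alone, with nothing in it that corresponds to understanding what a root means:

      import math

      def quadratic_roots(a, b, c):
          # Mechanically apply x = (-b +/- sqrt(b^2 - 4ac)) / (2a):
          # pure rule-following over symbols, with no grasp of what a "root" is.
          disc = b * b - 4 * a * c
          if disc < 0:
              return None  # the rule says there are no real roots; stop here
          r = math.sqrt(disc)
          return ((-b + r) / (2 * a), (-b - r) / (2 * a))

      print(quadratic_roots(1, -3, 2))  # prints (2.0, 1.0)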

    ReplyDelete
    The blog has deleted my original post, therefore I am re-posting my skywriting:

    In the Chinese Room Argument, Searle rejects the proposal that cognition is altogether computation; he emphasizes the weaknesses of this Strong AI position. Professor Harnad’s “What's Wrong and Right About Searle's Chinese Room Argument” disagrees with the extremity of Searle’s claim and analyzes how computation could still be at least PART of cognition. Given the great diversity of views in this field, I’m curious whether there are other research papers that critique Searle’s CRA: what other perspectives perhaps side with Searle’s conclusion, but reach it in a different way? And, in terms of Professor Harnad’s position in this week’s reading, what would be examples of other research agreeing that cognition can still involve SOME forms of computation? Do they make different suggestions about the non-computational components that contribute to cognition?

    ReplyDelete
    Replies
    1. Hi Michelle,
      While I was reading the paper, I was also wondering whether there are any other research papers that support or discuss the argument that “cognition can still involve some computation,” and I did find one: “Information Processing, Computation, and Cognition” by Gualtiero Piccinini and Andrea Scarantino. This paper doesn’t really support the view that cognition can involve some computation, but it does show what the opposing view thinks of the argument. The authors argue that a theory of cognition must state which specific computations are involved in cognition. Hope this helps!

      Delete
  47. In the discussion, Professor Harnad talks about how he partly agrees with Searle's Chinese Room Argument but also thinks Searle takes some of his conclusions too far. I want to focus on something I don't agree with. Harnad says that things without consciousness, like toasters, don't have any form of mental state. But let's break this down: what does it mean to be 'unconscious'? For this discussion, I'll take it to mean 'not able to feel'.
    So, can something have a mental state even if it doesn't feel anything? I think the answer is yes!
    Imagine a simple bacterium. It has basic sensors to find food and avoid danger. Most people would agree that this bacterium doesn't 'feel' things, but it can still make decisions based on what's going on around it. We might even succeed in fully predicting its behaviour from its sensors and environment. If we look at a slightly larger multicellular creature, it probably also doesn't 'feel,' but it does gather and use information about its surroundings, which it may “store” in some specific cell for later retrieval. Isn't that a kind of mental state?
    As we go from simple bacteria to more complex beings like humans, there's a point where the creature starts to 'feel.' But it's clear that even before that point, the creature is gathering and using information about its environment. So, doesn't it make sense to say that it has some form of a mental state even if it doesn't feel anything?
    A possible counter-argument would be to effectively define mental states as feeling states, in which case saying mental states require consciousness is tautological.

    ReplyDelete
  48. I’m not really sure that I understand the concept of Searle’s Periscope. If my mental state were fully computational, would that mean I could experience someone else’s mental state if it were also completely computational? What does it mean that the “soft underbelly of computationalism” is the exception? Does the Chinese Room thought experiment fall under Searle's Periscope, given that the person in the room is implementing a computational state, acting only as a processor of syntax?

    ReplyDelete
    This was a skywriting posted on 15/09 that was deleted without my knowledge for some reason, but I will post it again regardless:

    Professor Harnad’s critique of Searle’s paper addresses the questions in my previous skywriting! It felt completely illogical for Searle to insist that understanding must be conscious understanding; there are many forms of understanding that we cannot relay introspectively because of their unconscious nature. The manipulations that Professor Harnad applies to the three propositions of the CRA helped me understand why I felt this way. Searle greatly misrepresented the intended use of AI, and I believe this introduced a bias into how the CRA was formulated and its results interpreted. He assumes that strong AI was made to operate at the level of human cognitive processes, but that misrepresents the actual application of AI as a practical system for the specific tasks it is coded to execute; it was never intended to replicate human cognition. I believe the most important aspect of Professor Harnad’s critique concerns Searle’s lack of engagement with the concept of functionalism. I find it really difficult to deny that our brains operate computationally: it makes sense to me that there are specific rules and processes that can be followed to produce an output, and I think we can apply this in AI. But it is important to note that understanding may not just be a result of our qualia; it may also be a result of the interactions within the computational system. As Professor Harnad strongly highlighted, irrespective of the foundation, comprehension may be derived through the functional interactions and processes inside a system.

    ReplyDelete
  50. Harnad's dissection of Searle's Chinese Room Argument contemplates the gap between 'understanding' and computational states. It's amusing yet telling how Searle's thought experiment, designed as a 'gotcha' against Strong AI, nudged the field toward acknowledging the limits of purely computational cognition. The argument draws a line in the sand, challenging the notion that computational states alone can encapsulate the richness of conscious understanding. If a system replicates the function of understanding without the 'feel', like an actor delivering lines without grasping their meaning, are we witnessing a form of cognition or just an illusion of it? As we progress toward T3 and beyond, could the 'feel' that Searle argues is missing emerge from the complexity of interactions within a system, or is it forever confined to the organic realm of conscious beings?

    ReplyDelete
