Monday, August 28, 2023

3a. Searle, John R. (1980) Minds, brains, and programs

Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.





see also:

Click here --> SEARLE VIDEO
Note: Use Safari or Firefox to view; 
does not work on Chrome

122 comments:

  1. In his examination of “III. The brain simulator reply,” Searle criticizes the fact that this view “[simulates] the wrong things about the brain:” that the focus should not be on simulating synapses and neuronal firing patterns, but rather “what matters about the brain” - namely, its “ability to produce intentional states.” This emphasis on intention recurs throughout the text, with Searle repeatedly suggesting that intention is a requisite for understanding.
    I, however, fail to see this direct link between understanding and intentionality. Even in everyday life, there are some things that I understand without necessarily intending to; an example being automatic sentence comprehension. Though one might counter my view saying that, in this case, the intention may be occurring without my awareness, I argue back: can intention really occur subconsciously? Is not intention by definition something that occurs consciously?
    (Though this discussion takes us away from the questions of whether machines can think (Searle says yes), whether a computer’s ability to pass the Turing Test is sufficient proof that it can think (Searle says no), and whether instantiating a program is enough to produce understanding (Searle says no: a program is not sufficient for intentionality, and by extension understanding, which is the point I’m confused about)...)

    ReplyDelete
    Replies
    1. In his text, Searle ties intentionality to the capacity to produce intentional phenomena like perception, action, understanding, and learning. These, he argues, constitute thinking/understanding (note that these ‘phenomena’ do not require you to intentionally understand, as you put it); they are not produced by a symbol-manipulating machine and exist only because of the biological, chemical and physical structure that our brain is made of. He does point out that if we built a machine with this biological structure (T4!), it would have intentionality identical to a human’s, as he described. This is why he states that a program (manipulation of formal symbols) is not sufficient for intentionality, and by extension understanding.

      Delete
    2. Searle also brings up intention when discussing how we are prone to put our own intention onto the tools that we use. He says, “our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them” (Page 5). For example, we might say that a motion-sensing light “knows when to turn on”, when in reality it does not know or understand anything. This does not answer your question of how understanding and intentionality are connected, but I find it interesting to understand why we would project intentionality onto machines. Perhaps Searle is doing the same thing in assuming that intentionality correlates with understanding in machines.

      Delete
    3. I think that Searle isn’t using the word “intentionality” in its regular sense, where it means something like “the ability to act according to a plan you set for yourself.” He’s using it in the philosophical sense, where it means to be able to link mental states with real-world things. In this case, the words “intentionality” and “understanding” would be synonyms. Both refer to the property that humans have and computer programs lack, which is to attach a semantic meaning to abstract symbols. When Searle is answering questions about a story in English, he’s acting intentionally because he is using words to refer to real things. However, when he’s providing answers in Chinese using the rules, he isn’t acting intentionally, because he’s just doing formal manipulation of what to him are abstract symbols. Computer programs don’t ground the symbols they manipulate in real-world phenomena, so they can’t be intentional - they can’t understand.
      Here’s the Stanford Encyclopedia of Philosophy entry for intentionality if you’d like to read more about it: https://plato.stanford.edu/entries/intentionality/

      Delete
    4. NOTE TO EVERYONE: Please read the other commentaries in the thread, and especially my replies, before posting yours, so that you don't just repeat the same thing.

      Delete
  2. This reading delves into the complexities of understanding and cognition, challenging the notion that mere symbol manipulation equates to understanding. The author, through thought experiments, questions the depth of artificial intelligence and its ability to truly comprehend language or context. He argues that even if a system can produce outputs identical to a human's, it doesn't necessarily understand the underlying meaning. Will AI ever be able to transcend formal algorithms to achieve genuine understanding or consciousness?

    ReplyDelete
    Replies
    1. From what I understood from the reading, Searle states that because intentionality is a biological phenomenon dependent on the biochemistry of its organism, computers lack this property. In fact, computers are unable to possess the notion of understanding since their operations are based on symbol manipulation, independent of meaning. He explained this point through his Chinese room scenario, in which he was capable of manipulating Chinese symbols to formulate responses indistinguishably from a Chinese person, without understanding a single word of Chinese. As such, although computers can manipulate symbols based on their shapes (computation), they do not truly “understand” and do not possess mental states in the way that humans can.

      Delete
    2. To me, it seems that if we wanted to answer “yes” to your question Marie-Elise, we would have to go even beyond a T5 robot/zombie. As Malika said, computers lack mental states (or to de-weasel it, don’t experience feelings as humans do), and thus can’t reach what we call understanding. Furthermore, understanding relies on sensorimotor experience with the world, which doesn’t apply to computers unless we consider T3, T4 or T5 robots. Nevertheless, to address the second part of your question regarding consciousness, if we define it as the reason for doing what we are currently doing, I would say that a robot’s consciousness wouldn’t exist unless it had the ability to understand, though software is probably the closest thing to “consciousness” in computers.

      Delete
  3. I had a question from class that is not fully related to this article but I thought I would ask here anyway. Has there ever been an AI developed that has been limited in the amount of memory it has, to be more similar to a human? No human can possibly take in the “big gulp” so perhaps lessening the amount of data that it can keep at a time would be more helpful for reverse-engineering. I understand that this would not make sense to make as a tool or for monetization in the way of ChatGPT, but in terms of research it could make sense.

    ReplyDelete
    Replies
    1. According to "computationalism" ("Strong AI"), cognition (thinking, understanding, meaning, being able to do everything thinkers can do) is all just computation (symbol manipulation). IF you think ChatGPT passes T2 (and some don't think it does, because swallowing the "Big Gulp" is unrealistic for a brain) then it is the capacity to do THAT (rather than just memory capacity itself) that makes ChatGPTs way unrealistic. But you may right that giving too much rote memory may also be unrealistic and even counterproductive in reverse-engineering cognition. (See in Week 6a: Jorge Luis Borges Funes the Memorious)

      Delete
  4. Searle suggested, based on the strong AI hypothesis, that “the mind is to the brain as the program is to the hardware” (page 8). I found it really interesting that he separated the different systems from one another. He establishes a distinction that we analyzed in class between hardware and software, but if we push that hypothesis even further, we could deduce that the brain is just a program reader, and thus it should be able to read different types of software/minds. If we think about the brain as a computer in that sense, it means that our entire personality, and also our ability to think, is defined through our mind, which conflicts with the idea that thinking is inherent to the brain’s structure. It seems to me that this vision of the mind is close to what we could call our soul (even though it might be a weasel word).

    ReplyDelete
    Replies
    1. From what I've understood, based on Searle's points on pages 13 and 14, this dualistic perspective, where the mind is software run by the hardware of the brain, is one held by proponents of strong AI. Searle establishes on page 14 that he believes it is only by a complete replication of the brain, which is T4, that we can create a thinking machine. Based on that, it seems as though Searle is a materialist. Interestingly, he continues to reject the computationalist mentality and the analogy of the mind as a program, as he finishes the article with "whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality".

      Delete
    2. Adrien, Yes, "mind" like "soul" is a weasel word. The right way for Searle to put it (as you'll see in 3b as well as the PPTs) would be that according to Computationalism ("Strong AI"):

      (1) Cognition = Computation

      (2) Computation is Implementation-Independent

      (3) T2 (via computation) is the test of whether you have found the algorithm that produces cognition

      This would apply to whether it was a computer or the brain or Searle that was implementing the T2-passing algorithm.
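
      (A toy illustration of point (2), implementation-independence -- my own made-up example, not anything from the readings: the same algorithm, here one that reverses a symbol string, can be realized by utterly different "hardware," yet its input/output behaviour is identical.)

      # Toy sketch of implementation-independence (illustrative only): the SAME
      # computation can be physically realized in very different ways.

      def reverse_with_loop(symbols: str) -> str:
          out = ""
          for ch in symbols:        # one "implementation": an explicit loop
              out = ch + out
          return out

      def reverse_with_slicing(symbols: str) -> str:
          return symbols[::-1]      # another "implementation": Python slicing

      # Either version -- or a person following the same steps with pencil and
      # paper, as Searle does in the CRA -- counts as running the same computation.
      assert reverse_with_loop("squiggle") == reverse_with_slicing("squiggle") == "elggiuqs"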

      Now what does Searle refute, and how?

      Omar, Searle concludes rightly (from his CRA) that cognition cannot be ALL computation but he goes further and says (wrongly) that cognition cannot be computation at all. (Why and how is that wrong?)

      He also concludes (wrongly) that his CRA shows the Turing Test is not a sufficient test of cognition; but he really only shows that T2, passed by computation alone, is not a sufficient test. (If so, can he refute ChatGPT?)

      He also concludes that the only sufficient test would be T4. Is he right about that?

      Delete
    3. This is the point in Searle’s paper I found most striking. Originally, I had given credit to Strong AI, but immediately changed my opinion once Searle made his point about dualism. On a surface level, it seemed to me as though Strong AI was the apotheosis of physicalism, so it was quite eye-opening to be shown that it indeed presupposes a strong dualist position. I am aware that Searle clarified that he is not saying that Strong AI presupposes Cartesian dualism, but it is evident that in espousing Strong AI, one must concede the existence of a (weasel-word incoming) soul, like Adrien described. Assuming this form of dualism to be false, then, I believe Searle is correct in saying that whatever understanding and intentionality are, they must have evolved from the physical properties of the brain.

      Though if this is the case, due to the other minds problem, we can never really know if a different physical system other than our brain is capable of producing understanding and intentionality. So, to answer your question Dr. Harnad, I agree with Searle and believe T4 is the only sufficient test for determining cognition, since we can only confidently attribute cognition to machines that we have a strong tendency to believe are capable of cognizing, such as our brains.

      Delete
  5. I think this reading helped solidify the difference between weak and strong AI. Weak AI cannot think cognitively, while strong AI can, meaning it can behave the way we humans do. However, the example that Searle gave from Schank’s program, in which the computer does not understand but manipulates the symbols to create an answer, also suggests that strong AI does not exist yet, since it would be following an algorithm to answer, not understanding or thinking.

    P.S. The idea that strong AI behaving like humans reminded me of computationalism, how we discussed in class, that computationalism stands for cognition being computation.

    ReplyDelete
    Replies
    1. From my understanding of our in-class discussions, Professor Harnad mentioned that computationalism is also known as strong AI. Based on Searle’s description of strong AI, I think that rather than saying there is simply no example of strong AI yet, he is saying that the principles outlined by strong AI could never produce a thinking and understanding example of artificial intelligence that has mental states. It seems to me that Searle not only states that there isn’t a current example, but also that there could never be one, following the ideas of strong AI.

      Delete
    2. Selin, Strong AI is the same as computationalism. What is that?

      And what do you mean by "Weak AI"? I said that it too was the same as something else, but what was that?

      Jenny, yes, Searle says computationalism is wrong: why?

      Delete
    3. Computationalism is the idea that cognition is basically computation, and I am not sure, but I think weak AI is that it can run a certain task based on algorithms. It's that computation can model or simulate anything (?)

      Delete
    4. Correct: The Strong C/T Thesis.

      Delete
    5. Searle says that computationalism is wrong because if cognition were only computation, as computationalism suggests, then he should have been able to understand Chinese after performing his Chinese Room Experiment. Instead, he was able to identify that computation does not entirely encapsulate cognition, because he did not feel as though he understood Chinese. If computers act in the same way as he did under these constraints, then the computer wouldn’t understand what it was doing while responding, which means we have not produced an acceptable reverse-engineering of the human mind. This brings up implications for the other-minds problem, because we are unable to tell whether beings other than ourselves are able to feel. Searle goes on to take this idea to the extreme by saying that cognition is not computation at all.

      Delete
  6. Reading the brain simulator reply, which emphasizes simulating the neurons firing at synapses using parallel processes as the brain does, made me wonder whether the symbol manipulation could be processed in parallel with the understanding (as in the attribution of meaning to the symbols in a way that makes sense for us). What if, when learning a language, we learn the syntax so that we can speak correctly (which doesn’t require understanding the words, just knowing their class)? And because you experience life in a world where you see, you are able to name an object that you see in real life because you are told what it’s called. That way, the words you use when constructing a sentence have a meaning. In the case of a program, it could be that because the machine doesn’t exist in the real world, it isn’t able to give the symbols any meaning after the symbol manipulation, and therefore isn’t able to understand. If we were to create a program that taught it how to associate a symbol with a concrete real-life object, then it would be able to do the symbol manipulation and understand the story and the answers to the questions given.

    ReplyDelete
    Replies
    1. About brain simulation, remember the simulated ice cube. What does that imply?

      The connection between internal symbols and the outside world (T3) is the core of the Symbol Grounding Problem (Week 5).

      Searle refutes computationalism. Can you use T3 to refute the "Brain Simulator" reply? (Try to avoid any sci-fi.)

      Delete
    2. A brain simulation is like an ice-cube simulation. It is just symbol-manipulations (Searle's "squiggles and squoggles"), interpretable (by us, users) as neurons and ice-cubes, firing and melting, but they are neither firing nor melting.
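
      (To make the ice-cube point concrete, here is a toy sketch of my own -- the function and its constants are made up, purely for illustration -- of what an "ice-cube simulation" amounts to: a rule that updates some numbers which WE interpret as melting. Nothing in the machine gets cold or wet.)

      # Illustrative toy only: a "simulated ice cube" is just numbers updated by a rule.

      def simulate_ice_cube(mass_g=50.0, air_temp_c=25.0, minutes=60):
          """Update a number we *interpret* as the remaining ice mass over time."""
          melt_rate_g_per_min = 0.02 * air_temp_c   # arbitrary toy constant
          history = []
          for t in range(minutes + 1):
              history.append((t, round(mass_g, 2)))
              mass_g = max(0.0, mass_g - melt_rate_g_per_min)
          return history

      # The output is more symbols (a list of numbers). It is we, the users, who
      # read them as an ice cube melting; the computer only manipulates squiggles.
      print(simulate_ice_cube()[:3])   # [(0, 50.0), (1, 49.5), (2, 49.0)]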

      If computationalism is wrong, then the brain is not computing (or not just computing), i.e., manipulating symbols.

      See other replies about what a "machine" is. The CRA (Searle's Periscope) works if T2 is passed by a computer, but not for a T3 robot: Why not?

      What is a simulation?

      How does Searle refute T2/computation?

      Delete
    3. The simulated ice cube example implies that it is possible to create a symbol system whose properties and symbols can be interpreted as real properties of a real object. However, creating such system does not mean that this simulation is a real ice cube, merely that it can symbolically represent the properties of one. As previous readings have addressed, some non-human machines are able to simulate real objects, concepts, words, etc with a symbolic system, and then use this system to produce an output akin to a human. ChatGPT, for instance, swallowed the ‘big gulp’ and then presumably converted each word into symbolic or numerical notation, which it then uses to form structures based on statistical probability. The output may be akin, or in some cases identical, to how humans actually speak, but ChatGPT cannot be said to be understanding in the way a speaking human can. Searle, as I understand it, is refuting computationalism on the grounds that this ‘understanding’ component of human cognition is intrinsic, and that while computers may be able to replicate calculations performed by human mathematicians or things said by people using symbolic representations, they are not actually thinking. The brain simulation reply tries to make the argument that the brain could be simulated in precise detail in order to generate a cognizing non-human machine. However, just as a simulated ice cube does not equate to an actual ice cube, a simulated brain will be unable to cognize in the way an actual human brain can. This is seen in the fact that T3, a robot with the capacity to see, move, interact, and mimic human language is not doing so in the ways a human does. If I was taught a very precise series of steps to take to get across the McGill campus, and then was blindfolded and successfully did so, this would not be the same as walking across campus myself.

      Delete
    4. Thoughtful comments, but not quite on the mark yet:

      (1) To avoid the weaseliest of weasel-words in cogsci, it's safer to say symbolic encoding rather than symbolic "representation" (although in this case -- only -- it's an innocent use of the "r-word"). No need for a homunculus with a "representation" in its head. The computer is just manipulating squiggles and squoggles, and those squiggles and squoggles can be interpreted by US, the users of the computer, as "representing" an ice-cube. It's clear that it only represents an ice-cube to us; it doesn't represent anything at all to the computer that's executing the computation, any more than the words of a book represent anything to the book. They only represent something to US, who can understand language and read and understand the words.

      (2) If you simulate an ice-cube, the output is symbolic too: it doesn't really melt.

      (3) All the words in GPT's "Big Gulp" are already symbols; the data were just transformed into another format and then manipulated statistically to produce strings of words that are meaningful to US, but not to the GPT. As such, the words were more like data than computations; but the jury is still out on whether that all still counts as just computation.

      (4) The question is not about whether the computer is understanding "in the same way we do," but about whether it's understanding AT ALL: It's not.

      (5) What Searle means by "intrinsic" understanding concerns whether the computer itself understands; it doesn't. We are just projecting our own understanding onto it. That's what the CRA shows -- thanks to the Periscope.

      (6) A simulated brain is not a brain, it's just squiggles and squoggles interpretable (by outside users) as a brain. But since it's really just computation, the Periscope still allows the same conclusion: It's not T4; it's just a simulation of T4; so Searle can again use his periscope to execute that whole simulation, still not understanding the Chinese to which the brain simulation has been added.

      (7) And the very same thing can be said about a simulation of a T3 or T4 robot: That's neither a real robot nor a real brain. So if it is interpretable -- again, by US -- as a robot with a real brain, it's not; and if it's speaking Chinese, there's still no understanding there. Since it's all just squiggles and squoggles, Searle can still execute it all, without understanding a thing. (Do you see that now?)

      Delete
    5. Thank you for your reply to my comment, it clarified a few questions and confusions I had. A follow-up question your comment raised for me is whether you think that building an actual imitation brain in a T4 robot could potentially produce the non-computation stuff (internal states, understanding) incidentally? Essentially, my question is whether it is possible that understanding is a side effect of computation when that computation is carried out in the exact manner a brain carries it out -- aka using the same neuronal network -- rather than just carrying out a procedure according to rules which produce the same output.

      Delete
  7. Firstly, Turing thinks machines can think if they meet specific criteria. Searle contradicts this argument by saying that such machines cannot think, because strong AI is just programs, and programs are not sufficient for thinking. That is, AI-type machines, not machines with internal causal powers equivalent to the brain's. Searle says that only special kinds of machines, like brains or machines with internal causal powers equivalent to those of brains (octopi, for example), can think. I interpret this as meaning that the programs in strong AI still lack a causal connectedness to our world which cannot be fabricated. It seems like the ‘programs’ in our heads are just extra tools we use for what people call ‘information-processing’, but they can’t explain cognition on their own.

    ReplyDelete
    Replies
    1. Brains and computers are both machines, i.e., causal systems with certain causal "powers" (i.e., certain things they can DO). Cogsci is trying to reverse-engineer how and why any causal system -- whether a human brain (and body) or just a computer, or many computers that have taken the "Big Gulp" of ChatGPT -- has the causal "power" to pass T2, or T3, or T4. But what is Searle's CRA?

      Delete
  8. I really enjoyed Searle’s arguments against the possibility of strong AI being able to explain machine intentionality. He uses real-world examples to defend his argument, whereas I found Turing was more hypothetical and not as convincing.

    Searle believes that intentional machines require input, program, output and understanding. Strong AI, which attempts to explain thinking by computationalism, misses the very important part of intentionality, which is understanding or semantics. He doesn’t argue that human mental states that create understanding couldn’t be implanted into computers in theory, but this does support his argument that human cognition cannot be all computation. Searle emphasizes that simulation cannot be duplication, so no matter how far Strong AI simulates the behaviours produced by humans, it is not duplication because the computer cannot understand the meaning of its symbol manipulation.

    One argument of Searle's that I found kind of lacking was his treatment of the ‘other-minds problem’. He says that if we are going to study cognitive science we have to assume there is intentionality, the same way a physical scientist has to assume that physical objects exist. I don’t disagree with Searle’s point, but I found it lacked support. Seeing as the other-minds problem comes up a lot in cognitive science, I think this argument required more exploration.

    ReplyDelete
    Replies
    1. Please try to derive what you are trying to conclude here from what the CRA is, and what it can and cannot show.

      Turing proposes a means (computation) and a way to test whether you succeeded (whether you use only computation, or you use more). Searle shows that computation-only plus T2 are not enough. But what, if anything, does his CRA show and not show about T3, and about noncomputational means?

      Delete
    2. T3 is immune to the CRA and Searle's Periscope: Why?

      Delete
    3. REPOST Blogger Deleted Original
      Kaitlin Jewer October 23, 2023 at 2:44 PM
      Searle's CRA shows that cognition cannot be all computation because computation
      does not understand meaning but we do.
      The T3 is immune to the CRA and Searle's Periscope because the Turing Test is not
      designed to solve the hard problem in cognition (how and why we can feel). We can't
      know whether Anais (T3) is truly feeling meaning because this leads to the OMP.

      Delete
  9. According to the 3a reading, there seems to be a big gap between the typical machine or computer and a real AI, called intentionality. It is interesting to relate the concept of mind to intentionality, but I need some clarification on whether the mind and intentionality are identical, or even totally the same thing, for Searle. Or, in his words, is intentionality the product of the mind?

    Also, it seems like, for Searle, T4 is quite crucial, or the bottom line for a computer to be considered an AI, as owning the sensorimotor function means that the machine could find the inputs by itself.

    ReplyDelete
    Replies
    1. Please read my other replies about "intentionality" and "mind" (both weasel-words).

      Yes Searle bets all his money on T4, but he gives no clue how to reverse-engineer it.

      And in the TT hierarchy, just as T3 capacity has to include T2 capacity, T4 function has to include T3 capacity -- all of them Turing-Indistinguishable lifelong.

      Delete
  10. In "Minds, Brains and Programs" Searle acknowledges that the human brain, capable of intentionality and understanding, is also a machine. He argues that what gives the human brain the ability for intentionality, and what separates the human brain from Schnak’s machines, is the capability for “perception, action, understanding, learning, and other intentional phenomena,” which arise out of the causal powers of the brain’s biological structure (10). This, Searle argues, is why a “purely formal model” will never be capable of intentionality, as these models are missing the structure to produce these phenomena (11). Searle goes on to argue that if we were able to produce a T4 (a machine with the same biological structures of human brains) this machine would be capable of intentionality and understanding, as “if you can exactly duplicate the causes, you can duplicate the effects” (11). Does Searle then argue that the only way to understand cognition/thinking is by replicating the human mind? What are these biological structures that are necessary to produce intentional states? Using Searle’s argument, what is the function of these machines, if they cannot tell us anything about human cognition without possessing the same biological structure?

    ReplyDelete
    Replies
    1. Good points. In criticizing Turing, who provides a candidate mechanism (computation) as well as a method to test it (the TT), Searle just says: go study the brain.

      But first, he does provide an argument (the CRA) against computation and T2: what is the argument, and what does it show, how?

      Delete
    2. Searle’s argument against computation and T2 is that even if a machine could pass a T2 TT, and perform indistinguishably from a human or thinker, that does not solve the question of “does a machine think?” (or in Searle’s reformulation, “does a machine understand”). Searle does this by positing a situation where, rather than a machine answering questions via email (as in a T2), Searle himself responds to questions in Chinese. The aim of this thought experiment is to show that by following a set of rules Searle can respond in a way that is indistinguishable from a speaker of this language, even though he does not understand the input/output. Through the CRA Searle argues that the T2 TT is not a definitive test of cognition/understanding, and that cognition cannot be solely computation.

      Delete
    3. Spot-on. Just one important detail: T2 (if passed by computation alone) is not merely "not a definitive test"; the CRA shows that the computational candidate fails to understand.

      It is interesting, today, that Searle's response to the "System Reply" (which was that "You, Searle, would not be understanding in the Chinese room because you would only be a part of "the System", but the System would be understanding") was to memorize the T2-passing algorithm and execute it in his head: Then Searle himself would be the whole "System," and would still not be understanding Chinese.

      Well, it's interesting to ask this question today about ChatGPT: If ChatGPT does pass the Chinese T2, perhaps it's still reasonable that Searle could memorize and execute the algorithm without understanding Chinese; but what about the 2021 "Big Gulp" of words? No human could swallow that. So, is the "Big Gulp" cheating, because it's not just computation; it's data? That would be the same sort of thing as using a data link to Wikipedia or Google or the entire Internet. That's no longer reverse-engineering the cognitive capacity of one autonomous TT candidate, any more than adding a link to a committee of Chinese Nobel prize-winners would be...

      Delete
  11. Searle’s main argument seems to be that T2 in and of itself is not a good litmus test of whether we have reverse-engineered cognition. The CRA posits that even a human with no understanding of Chinese can pass T2, but we would not say that this human is thinking or acting in Chinese. Although, through computation alone, a machine (or human) can pass T2, that isn’t enough to say they have any understanding of the task they are passing. They are simply generating outputs based on the rules accessible to them and the inputs given; there is no need for any deep understanding.

    Where I disagree with Searle is in his “Robot Reply” refutation. Although I may be misunderstanding his argument, I believe he grossly mischaracterizes perceptual and motor capacities. A robot with the ability not only to compute like a T2, but also to interact with and perceive the real world, is nowhere near the same as an automatic door. Giving a robot sensorimotor capabilities (which is what the Yale reply seems to be) is giving the robot the ability to pass T3. It would need to be able to perceive and interact with the world, and to link the computations to real-world objects through representations. Searle seems to be most content with T4, but I fail to see how the capabilities of T3 aren’t enough.

    ReplyDelete
    Replies
    1. You are basically right, but T3 is not just T2 plus I/O add-ons: a computer with a camera and wheels. Sensorimotor function, both in organisms and in cameras and wheels, is not computation any more than an ice-cube is.

      (Both Yale and Searle thought of sensorimotor function as if it were just a computer's peripheral devices and the computer were the only thing doing the work. See 3b.)

      Delete
  12. Searle states that cognition and computation are completely different, because only machines can think, and computation “is about programs, and programs are not machines” (14). To me, cognition for humans is similar to computation for computers, and cognition is the program for the human brain. I wonder, if we follow this thought, do we arrive at a T3 model? If we program the mechanical brain of a computer to do thinking (on the same level as us), is that a T3 candidate? So is Searle just skipping T3, jumping from T2 directly to T4, since his CRA is mostly about T2 (I personally think he passed T2 by making everyone believe he understands Chinese) and he proposes to go for T4?

    ReplyDelete
    Replies
    1. I think achieving T3 is a prerequisite for achieving T4, so if Searle skips T3, his idea goes against the theory of the TT hierarchy. However, according to this article, what I got is that Searle does not have that kind of purpose; he is pointing out the incompleteness of Turing's statement.

      Delete
    2. Tina, I am not sure you understood or used the CRA. You seem to still be talking about what you think thinking is, and you still think it's computation. The CRA refutes that. How? (See 3b.)

      Evelyn, Searle doesn't realize that the CRA does not work against T3, nor why. (Why?) You're right that T3 includes T2 and T4 includes T3.

      Delete
  13. I found that something in common between the theories presented is the belief that machines (computer programs) could be designed to be able to reason and understand the meaning of information as a human could. In the section entitled ‘The Robot Reply’ it is argued by those who created the reply that by putting a computer program inside a robot and giving this robot human-like abilities such as seeing through a digital camera, or legs and arms, the robot would reach a genuine understanding of stories such as the one used as an example in this reading, as well as other mental states. In other words, the robot would have a human mind or at least it could be considered that the way in which it arrives at the understanding of stories is through a process similar or equal to the one that human minds go through to understand information. However, this reply doesn’t really seem to propose an explanation as to how the computer program is recreating or would recreate what happens in our brains (our mental states or processes) when we think.

    ReplyDelete
    Replies
    1. 1. What is a machine? (See other replies in the skywritings.)

      2. What do you mean by "the meaning of information"? What is "information"?

      3. "Mental" and "mind" are weasel-words: You mean FELT states. What role did feeling play in Searle's argument? (Searle used it in the CRA, but didn't notice of note that he did.)

      4. Neither Searle (nor anyone) has so far successfully reverse-engineered either T2 or T3. The CRA is based on supposing that T2 could be passed by computation alone (i.e., computationalism, "Strong AI") and showing (with his "Periscope") that that would NOT produce understanding. What does the CRA show? And could the CRA apply to T3 too?

      Delete
  14. For some reason, I thought Searle's refutation of computationalism was very refreshing: How could a machine that's not made of the same stuff that a brain is made of (i.e., synapses, neurotransmitters, complex connectivity traces...) possibly be comparable to the cognizing that we cognizers do? The professor said further up in the comments that intentionality is a weasel word (it's true: in the way Searle meant it, you can use it interchangeably with 'soul' or 'consciousness', both fancy-sounding weasel words), but I think that simply "awareness" would work. The Chinese Room paradigm requires the symbol manipulator to be completely isolated in the room, thus unaware of the rest of the system. The same way the machine doesn't know what it's working for (it was programmed, that's all it "knows"), the man in the room is also blind to the interpretation of the symbols he sends out. A thought that is far from being fully fleshed out: Searle identifies symbol manipulation as a discrete "machine" within the machine, and talks about a "subsystem" that could correspond to this in the human mind. But in the human mind, every process, thought, and feeling can so easily be connected to others, categorized and qualified in one way or another, that this categorization becomes mere human convention. And this is what I think Searle's point was: that you can't equate the mind, infinitely complex, to something as simple as a machine, with functions like boxes. Can anyone agree or disagree with this?

    ReplyDelete
    Replies
    1. 1. The CRA is not based on "stuff": what is it based on?

      2. "Awareness" is just as much of a weasel-word as the others. Try FEELING instead: How is that related to understanding?

      3. See other replies in these skywritings to the "System Reply": getting rid of the walls and the room.

      4. Your last 2 sentences suggest you have not understood the CRA yet. That's not the CRA. See 3b

      Delete
  15. I really liked this reading; the analogies were very relevant and it brought up the point of understanding versus input-output computations. The problem with this question is that understanding still remains pretty vague, almost weasel-y. We, humans, understand. I get the author’s point that digital computers do not exactly understand. However, we do not know exactly how we understand. The phrases “it adds nothing” and “no intentionality” do not really explain the gap. The text also touches on the notion of dualism – a heavily criticized theory. I must join the author on the point that if dualism needs to be true in order to prove that machines can think, then strong AI has no chance.

    ReplyDelete
    Replies
    1. I agree that "understand" in this reading remains weasel-y. From the text, I gained insight about what understanding is not(understanding is not just symbol manipulation as demonstrated in the chinese room example), but it wasn't clear what Searle thinks is actually going on in the mind when understanding is occurring. To his credit, I don't think anyone really does know(if there was this course would be a lot shorter, and more boring!). Searle tried to explain that an important difference that sets apart understanding is "intentionality", that humans have and machines(digital ones) do not. "Intentionality" as Prof Harnad explained above, is itself a bit weasel-y. Searle ties intentionality into phenomena, such as perception and learning and later brings up feelings we attribute to animate beings(love, pain). The argument he's making seems to be, machines(digital ones) can't understand because they can't feel, and that to create this "feeling" we would have to duplicate the human brain.

      While I certainly associate understanding with feeling, I don't know if the human brain specifically is required for this. Certainly the human brain is successful at creating feeling, but why does that have to mean that only the materials that make up the human brain are capable of producing that result? Searle did address this, with an analogy to photosynthesis, and I appreciated that, but wished this thought had been explored further.

      Delete
    2. Garance, you're right that "understanding", like "intentionality", would be a weasel-word if we could not define it clearly. Here's a stab at it (and the CRA depends on it): T3-grounding plus what it FEELS LIKE to understand. More on this in Weeks 5 and 7.

      Josie, "mind" is a weasel-word too. Replace it by "what it FEELS LIKE to understand (e.g., Chinese)". That's what the CRA (and Searle's Periscope) show to be missing in T2 if it is passed by computation alone. Explain.

      You're right that when Searle points to the brain as the way to go, instead of computation and Turing-Testing, he is saying nothing at all other than that computationalism ("Strong AI") is wrong: cognition is not just computation.

      Duplication (cloning) is not explanation either.

      You can replace "intentionality" and "mind" by FEELING, but that's just clarification, not causal explanation (reverse-engineering). And feeling is still the Hard problem, whereas Turing's method can only address the Easy Problem.

      But your comments are thoughtful. And, yes, once cogsci succeeds in reverse engineering cognitive capacity, it will have explained cognition as well as plant science explained photosynthesis (though there may be other ways to produce cognition). But that will not solve the Hard Problem...

      Delete
  16. Searle shows that computationalism is wrong because the mere computational process is unfelt and therefore cannot alone be cognition. This considered, Searle thinks the only way to solve the easy problem is through at least T4. However, I think Searle overlooks T3, since he does not show that cognition doesn't involve any computation. He only suggests that cognition cannot be reduced to mere computation. Therefore, while T2 cannot help solve the easy problem, I am not convinced T3 should have been dismissed.

    ReplyDelete
    Replies
    1. Correct. But this is not about "reduction." Searle does not show that the mechanism that produces cognition does not include some computation.

      Delete
  17. I don’t really understand why Searle has to argue against this in the first place. It would make sense that a machine cannot understand what we understand and is only able to produce outputs because of algorithms. Isn’t the whole point of consciousness that you need a mind? And a machine only has a brain. How can something that is programmed have feelings? The brain structure linked to emotions is mainly the amygdala, so would it be possible to reproduce its functions in a machine or a computer? How can you replicate the exact functions of the amygdala? The amygdala is not programmed. It is biological. Searle mentions that “love and pain are neither harder nor easier than cognition or anything else”, and I completely agree: if a computer isn’t able to feel anything, how could you expect it to truly understand? Because to truly understand something, you have to feel something.

    ReplyDelete
    Replies
    1. Marine, you're partly right and partly wrong. It's not clear whether you need to reverse-engineer the amygdala (T4) to reverse-engineer cognition (unless the amygdala is the only way to produce feeling). But you are right that it FEELS LIKE SOMETHING to understand. So you may or may not need to reverse-engineer the amygdala to pass T3 (and solve the Easy Problem), but you may need to reverse-engineer the amygdala (T4) to produce feeling. However, because of the Other-Minds Problem, you would not know WHETHER you had produced feeling by reverse-engineering the amygdala with T4; and because of the Hard Problem you would still not have explained HOW or WHY the amygdala produces feeling.

      Jessica, see my reply to Marine.

      Delete
  18. The discussion of strong AI versus weak AI was interesting to me, as I have never read such a concise and easy-to-read paper distinguishing the two from one another. I have understood that weak AI is the idea that computers are helpful when we need to explain most things, whereas strong AI assumes that cognition is computation. After reading the professor’s replies to posts, it was also clarified that Strong AI is computationalism. I have also gathered that Searle believes that intentionality and understanding are the same thing, and are what ignite our ability to understand symbols; therefore, Searle argues that cognition is not JUST computation, since AI can’t replicate the mechanisms of the brain’s understanding. The discussion of reverse-engineering something as T3 or T4 reminded me of our discussion in class about Descartes and not knowing whether anyone else is a zombie, because we know that we feel, but we don’t do an action to show that, so how can anyone be sure that we are feeling? How would one reverse-engineer into a non-human machine what we cannot decipher in humans? It seems like a few steps are missing and we must investigate the human brain and feelings first.

    ReplyDelete
    Replies
    1. Weak AI is also the Strong C/T Thesis.

      It feels like something to understand language.

      Searle wrongly concludes that cognition is not computation AT ALL, but the only thing that can be concluded from the CRA is that cognition is not ALL computation. However, that's enough to refute computationalism and T2 if passed by computation alone. It says nothing about T3, though.

      We DO do things that correlate with what we are feeling, but that does not solve the Other-Minds problem, because that's still just doings, not feelings.

      Yes, both the "Other-Minds Problem and the "Hard Problem" are problems for CogSci. Explain the difference.

      Delete
    2. I'm not sure if we are allowed to reply to questions in other people's threads, but here's my stab at defining the "Other-Minds Problem" and the "Hard Problem".

      The "Other-Minds Problem" has to do with the fact that we can't know with certainty whether others are thinking as we do. In other words, there is no way for us to determine whether someone (or something) who behaves exactly like a human being is thinking like a human being.

      The Hard Problem has to do with our inability to explain WHY and HOW human beings feel. Not to be confused with the Easy Problem, which would be solved when we figure out how and why we can do the stuff we do.

      Delete
    3. From my understanding, the "Other-Minds" problem in Cogsci focuses on the question how do we know if other beings have thoughts and feelings similar to our own whereas the "hard problem" of Cogsci is focusing on why/how cognitive process are associated with subjective feelings.

      Delete
    4. Aashiha, yes, that's the O-MP, and it applies to any thing in the world -- living and non-living: the only thing you can know to be thinking/feeling is yourself (because it feels like something to think).

      Delaney, the OM-P is not about whether others have thoughts/feelings "similar to our own" but about whether they have thoughts/feelings at all. The HP is the problem of explaining how/why any thing FEELS at all, rather than just DOING (i.e., the EP).

      Delete
  19. I thought this paper was an interesting, if a little frustrating, exercise in defining what is needed to confidently ascribe understanding. Searle’s argument supposes that an individual within a room can act as an input-output device which receives Chinese characters and returns other characters in such a way that someone outside of the room would believe he understood Chinese. This is meant to be an argument against Strong AI, or the idea that a computer system built to use language may ultimately understand that language.
    I struggle with Searle’s view particularly as it relates to the nature of biological systems. Searle states that ‘only something that has the same causal powers as brains can have intentionality’. This assertion seems to be a rather arbitrary way of excluding non-biological systems from having understanding without fully explaining why this is justified. Especially given the point he makes that a single semantic meaning can be represented in many different ways depending on the language being spoken, shouldn’t this idea that meaning can take many forms (and so can the understanding required to produce those meanings) permit space for a program with intentionality?
    In the CRA, we assume that the learning that is true of a biological system will be true of a computer system (by conflating the learning the man in the Chinese room can do with the learning that a computer/program can do). This seems to me to be in conflict with Searle’s assertion that no computer can be intentional because it is not made of the same matter as our brains. I imagine I’m missing something here that will reconcile these ideas - can anyone shed some light?
    Thanks!

    ReplyDelete
    Replies
    1. The CRA shows that computation alone cannot explain cognition (which includes language understanding). But it says nothing about what else is needed, and certainly not that studying the brain is the only way to find out (Week 4). (This has nothing to do with the multiplicity of languages or the many ways to say the same thing in a single language.)

      Explain what the CRA shows and does not show. (See the other replies in this thread and the 3b thread.)

      "Intentionality" is just a weasel-word for feeling (the "Hard Problem.")

      Delete
  20. I really enjoyed this reading, as it explained some frustrations (and points of confusion) I’ve had with AI for a while. Specifically at the end when Searle addresses devotees of dualism, who hope that the brain is a “digital computer,” pointing out that early computers were similarly called “electronic brains.” This line of reasoning has continued to frustrate me, since many of my other cognitive science courses have described the long history of the “mind as a X,” with whatever technology was new at the time subbing in as “X.” I know this is the nature of science, that paradigms shift as we learn more, but often the efforts to completely characterize or portray cognition as a digital process just seem futile. How do we know the “mind as a computer” model is more apt than the “mind as a pump of humors” model from centuries ago?
    Other than somewhat validating my feelings on the mind as a digital computer idea, I also thought Searle’s explanation of strong and weak AI was very helpful in understanding the different approaches I’ve seen AI be used for in various other contexts.

    ReplyDelete
    Replies
    1. Reverse-engineering and Turing-testing is the way to find out whether your model actually works: whether it is really a causal explanation of how and why you can DO what you can do.

      Delete
  21. Ultimately, I like the weaker part of Searle’s conclusion—that thinking can’t be the result of pure computationalism (though he goes a bit far suggesting that “No reason whatsoever has been offered to suppose that such [formal] principles are necessary or even contributory [for understanding]”(4)).

    Still, I’m having a hard time conceptualizing the CRA. The idea that any amount of given rules would let someone respond to a complex story at a native level of understanding, without understanding the story or the response, feels impossible. What would the rules even look like? Is there even a set of rules that could allow someone to respond natively to a language they don’t understand, without being at all grounded in meaning? Obviously these rules couldn’t be as simple as “’squiggle squiggle’ is followed by ‘squoggle squoggle’” (6). My intuition is still that Searle is right about this, and with a little begging the question, this seems to be essentially what LLMs like GPT are doing (or so I think), but the idea that by following a set of rules one could produce a reply that seemingly understands the story of the hamburger has been impenetrable to me this whole reading.
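
    (Here is my own hopelessly oversimplified toy, just to make "formal symbol manipulation" concrete for myself. A real T2-passing rulebook would have to be an algorithm enormously more complex than a lookup table, but the point is the same: the rules relate symbol shapes to symbol shapes, never to meanings.)

    # Toy fragment of a "rulebook" (illustrative only): input strings map to
    # output strings, and the program consults shapes, never meanings.

    RULEBOOK = {
        # To the program these are just character strings ("squiggles").
        "他吃了汉堡吗？": "没吃。",   # interpretable by US as "Did he eat the hamburger?" -> "No, he didn't."
        "他付钱了吗？": "没有。",     # interpretable by US as "Did he pay?" -> "No."
    }

    def chinese_room(input_symbols: str) -> str:
        # Match the shape of the input; return the shape the rules dictate.
        return RULEBOOK.get(input_symbols, "我不明白。")   # the default is also just an unanalyzed string

    print(chinese_room("他吃了汉堡吗？"))   # prints a correct-looking answer, with no understanding anywhere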

    ReplyDelete
    Replies
    1. Try ChatGPT on Schank's hamburger story (but make it a vegeburger!).

      (But "Stevan Says" ChatGPT may be cheating by doing it with its "Big Gulp" of data rather than just an algorithm. If so, then T2 has not yet been passed by computation alone. The CRA, though, is a conditional -- or even counterfactual-conditional -- argument: "Even if T2 could be passed by computation alone, it would not produce understanding."

      Delete
  22. Perhaps I am among the many who misunderstand Searle’s notion of understanding, but my problem with Searle’s Chinese Room argument lies in the fact that he leaves so much room for misinterpretation that he then has to spend most of his time refuting critiques. From my comprehension, Searle conceptualizes “understanding” as synonymous with a causal explanation with intentionality (although I now see that intentionality is just a weasel word, better replaced with feeling).

    This is my problem with such a definition of understanding: When we, or any machines that are capable of doing so, LEARN something (i.e, a language, algebra, etc.) are we not deriving “meaning” from symbols that are objectively arbitrary (“Squiggles and Squoggles”)? If we break anything that has any meaning to us down to its constituent atomic building blocks (even Searle’s beloved “causal” biochemistry), do we not find symbols just as arbitrary as the 0s and 1s that Searle discounts as having no inherent intention? Perhaps our “intentionality”, or rather FEELING (replacement of the weasel word) is simply a result of the expansive neural connectivity of the brain (which we have yet to crack the code for), but the atomic units themselves don’t actually mean anything.

    I do agree with Searle that computationalism is not sufficient in explaining cognition (i.e, cognition isn’t all computation) as computation, in isolation, does not lead to understanding. However, if we combine this “meaningless symbol manipulation” with some baseline “representation” of what the symbol corresponds to, such that it can be realized through the senses (i.e, an apple is an apple because we know what it looks, tastes, feels and smells like) or through logical inference and higher level abstraction of these basic representations, THEN can we explain cognition? (I know someone else touched on this point, but I wanted to add that once we have these baseline representations, perhaps we categorize them and interpret them through some computational process that allows us to cognize).

    ReplyDelete
    Replies
    1. In Cogsci, the weaseliest of the weasel-words is "representation," which is hopelessly homuncular.

      Computation is computation; sensorimotor function is sensorimotor function. Thinking, and meaning, and understanding are things thinkers DO plus whatever it FEELS LIKE to do, and to be able to do them (if it feels like anything at all -- otherwise there is no thinker, just a zombie).

      Reverse-engineering and Turing-Testing can only solve the Easy Problem of explaining the causal mechanism underlying the capacity of thinkers to DO what they can DO. It cannot solve the Hard Problem of explaining how and why thinkers can feel (if they are not Zombies).

      You would be right to say that it sounds as if Turing's Cogsci is just Zombie Cogsci. But Turing is right in replying that you can't do any better than that, even with T4. (In other words, Cognitive Neuroscience is Zombie Cogsci too.) That's part of the topic of Week 4.

      Delete
  23. Searle argues that to believe in strong AI, we need to subscribe to a strong form of dualism, where the mind is separate from the body. In strong AI, the computer is presumed to be not only a tool to study the mind, but also a mind itself that can have cognitive states if programmed correctly. Therefore, strong AI assumes that the mind can exist and function outside of the body. By that logic, would it be theoretically possible to transfer all the contents stored in a human brain into digital form, and function inside a robot?

    ReplyDelete
    Replies
    1. Philosophers! Always accusing mortals of being dualists just because they can't solve the Hard Problem (or venture to). (Fodor will be doing the same thing, but this time against the brain rather than the computer!) Guess what, "dualism" is a weasel-word, and so too are all philosophers' metaphysical positions on the Hard Problem: monism, materialism, functionalism, identity theory, epiphenomenalism, etc. etc. All empty egg-shells.

      All talk about computer teleportation, like all talk about brains in vats, is armchair sci-fi, not cog-sci.

      Delete
    2. My skywriting was deleted for some reason. This was my reply to Anais:
      Hi Anais, I believe Searle argues against strong AI and computationalism and, thus, dualism is not relevant for the present argument. What does the TT tell us about HOW we humans cognize? Searle is arguing that simply programming a computer (T2) is not enough to explain human cognition (e.g., CRA argument: we still don’t know if the machine understands Chinese). Thus, cognition is not solely computation (i.e. symbol manipulation). Rather, Searle suggests cognition might actually be dependent on sensorimotor processing in order to connect internal symbols with the outside world (T3). He argues that feeling/thinking is a biological phenomenon. Thus, coming back to your original question, even if strong AI were able to transfer a human’s memories/thoughts into a computer, according to Searle, the computer would only be able to read the computations, but we cannot know if it feels/understands/thinks for itself. We would potentially lose the human soul…

      Delete
  24. Good reflections.

    And, yes, Searle has completely confused the Strong C/T thesis with computationalism.

    Searle does refute computationalism.

    And he does use a piece of the Hard Problem in order to refute it, but negatively:

    He uses FEELING to show that computationalism cannot produce feeling (in the special case of T2 passed by a computer alone), because (using his Periscope) he himself, Searle, would not be understanding Chinese if he executed exactly the same T2-passing computations the computer was executing. And he can KNOW he is not understanding Chinese, regardless of how others interpret his Chinese input and output, because he himself is not FEELing that he is understanding Chinese (so neither is the T2 computer). And feeling you understand a language is part of understanding a language.

    And that's a matter of Cartesian (Descartes) certainty. (The rest of understanding is T3-grounding.)

    But apart from that valid (and obvious) refutation of computationalism ("Strong AI"), Searle is just flailing away emptily, redirecting people to "studying the brain" "instead of computers" because he thinks that his argument proves that's the only way. (And besides, "everything is a computer".)

    Last, Searle also insists on finding a solution to the Hard Problem (of explaining how and why we feel, which even Cognitive Neuroscience (T4) cannot do). So the call to explain the "causal power" of the brain when it comes to producing FEELING rather than just DOING, splutters out, yet another victim of the Hard Problem.

    [By the way, of course the brain produces feeling (it's "got the ['causal'] power"). But the Hard Problem is to explain HOW and WHY. Without that causal explanation, FEELING just dangles, as an apparently superfluous correlate of DOING power (Week 10) -- one that has even made some air-headed metaphysicians imagine that "epiphenomenalism" is an explanation, rather than just yet another restatement of the Hard Problem....]

    ReplyDelete
  25. I believe Searle includes a notion of strong equivalence in what he grants about T4: a machine with the same biological structure as a human brain would, according to him, have the same causal powers as the brain and would thus have intentionality and understanding. However, I believe this strong equivalence between T4 and the actual human mind is incomplete. As others have agreed, Searle gives no indication of what is missing from this strong equivalence that makes it incomplete, or of how the reverse-engineering would actually be done.

    ReplyDelete
  26. In "Minds, brains, and programs", Searle argues that being able to pass T2 is not enough for understanding Chinese in the CRA, since understanding (and cognition) cannot be a computational process. While I was reading Searle's arguments, I really didn't understand what intentionality meant, but it felt like it referred to something very similar to "understanding" (if anyone can enlighten me on this, that would be great!). So, when he says "whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena", does he imply that understanding must always have a biological basis? One could then say that Searle is in favor of T4 (physical indistinguishability, building the causal structure of the nervous system) for explaining "understanding". Since biological phenomena are observable, would "understanding" then always be observable?
    Since understanding involves "what it feels like to understand", I can't connect the "feels-like" quality to observable biological processes.

    ReplyDelete
  27. I found the article transparent until the last paragraph when he said “In defense of this dualism the hope is often expressed that the brain is a digital computer (early computers, by the way, were often called "electronic brains"). But that is no help. Of course, the brain is a digital computer. Since everything is a digital computer, brains are too.” I guess that he takes a broad view of what a digital computer is, defining it as a machine that manipulates discrete signals rather than digital computers as they exist in their actual binary form.
    I recall from our class discussion that both in computation and in the brain, processes can be treated as discrete because we can make the discrete differences as small as we like. But I can't understand why this should be so, given that the physical world is continuous and so are the neurological processes in the brain. While I can conceive of how the brain might process discrete signals, I'm not convinced that this inherently makes it a digital machine.
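
    For what it's worth, the classroom claim about discreteness seems to be just the observation that a continuous quantity can be approximated digitally to any desired precision. A minimal Python sketch of that point (standard library only; my own example, not from the readings): sample a continuous function on finer and finer grids and watch the worst-case step error shrink. Whether this approximation fact makes the brain itself a digital machine is, of course, exactly the question raised above.

      import math

      def max_step_error(n_samples):
          """Largest jump of sin between adjacent samples on [0, 2*pi]."""
          ts = [2 * math.pi * i / n_samples for i in range(n_samples + 1)]
          return max(abs(math.sin(ts[i + 1]) - math.sin(ts[i]))
                     for i in range(n_samples))

      for n in (10, 100, 1000, 10000):
          print(n, max_step_error(n))   # the error shrinks roughly as 1/n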

    ReplyDelete
  28. I found the Robot Reply preposterous when I read it, as I felt it skirted around the issue Searle was addressing (namely, that something that passes the T2 TT does not necessarily have intentionality* as humans do). The idea is that intentionality* cannot arise from a formal program, and adding sensorimotor abilities to the robot does not allow us to refute that (as demonstrated by Searle's refutation of this reply). I do agree that computation alone cannot explain how people think.

    *a weasel word

    ReplyDelete
  29. Schank's machine would pass T2, because it can answer questions about stories in the same way humans do. I agree with Searle that being able to answer these questions does not mean a T2 machine is capable of thinking in the exact same way humans do, but he isn't specific about what this machine is missing, besides "understanding", for reverse-engineering human cognition. Both a human and a machine are capable of filling in gaps of information based on clues in a story, but what he seems to be getting at with "understanding" is the subjective feeling of knowing something, like knowing a language in the CRA. I think that computation explains some of cognition (possibly most of it), but the hard problem of explaining how we FEEL is more relevant to what Searle argues is missing from strong AI.

    ReplyDelete
  30. Searle brings up the causal capabilities of the brain, unlike other things out in the world that have been colloquially and trivially anthropomorphized (e.g., the automatic door KNOWing when someone is about to walk through, and opening). I appreciate his explanation of why programs lack "intentionality" (boiling down to FEELING): because of the (my words here) 'causal kickstart' that someone has imposed on the program. What I'm getting at is an element of the CRA as well, though it is not written about in the paper and is merely part of my musings: Searle, in the room, transcribing Chinese, is merely following a set of predefined rules; the "intentionality" is not coming from him but from whoever created the rules, just as the "intentionality" of the automatic door does not come baked into some innate causal-what-have-you in its circuitry, but instead from the designers and engineers of the door (my understanding of Searle's use of intentionality, weaselly as it is).

    Of course, to have a useful theory of cognition we need to be able to explain it. I understand the appeal of computationalism: IF it were true, it would be so nice to boil cognition down to pure computation, since we can wrap our heads (and computers) around that, but of course we can't end it there. Then enter ChatGPT (or some program that has similarly taken the big gulp): tracing the "intentionality" back to the original programmers gets more difficult, and the program gets so complex that it's unclear whether it has causal capabilities (ChatGPT in its current state does not, as it's been limited by OpenAI, but it's easy to imagine a world where those guardrails have been removed and it's been equipped with some way to causally and directly affect the world [sensorimotor capabilities, per se]). All I mean to say is that Searle's explanation against ascribing feeling capabilities ("intentionality") to digital machines is not enough for me, now that ChatGPT and things like it exist.

    ReplyDelete
  31. In "Minds, Brains, and Programs," Searle asserts that 'understanding' implicates subjectivity (experience) and intentionality (having mental states that can be directed towards concepts and/or objects). Therefore, mere symbol manipulation by following a set of rules, syntax, is not sufficient for a computer to have consciousness since it does not assign meaning, semantics, to the symbols. When manipulating symbols, implementation-independence asserts that it does not matter the physical medium being used. This confused me when reading the 'combination reply' because Searle mentions that if a robot produces behaviors indistinguishable from that of humans, intentionality could be attributed to it. If this is true, could we not imply that the physical medium in which symbol manipulation is taking place would be a crucial component to intentionality and therefore consciousness? I am thinking that maybe because attributing intentionality to a robot is on the basis of external behavior and not its internal state. Although I am not sure.

    ReplyDelete
  32. The crux of the CRA seems to be that cognition cannot just be computation, because there is a difference between something purely computing (a non-Chinese speaker computing Chinese responses) and something understanding (an English speaker responding to English), even though the inputs and outputs are indistinguishable to a native speaker in both cases. Searle seems to suggest that cognition can't have anything to do with computation, but nothing about the CRA proves this; all the CRA proves is that there has to be something more than computation. People could still very well compute and then do something else to ground the symbols in meaning, i.e., manipulate the symbols 2+3 to get 5 and then do something else to have 2, +, 3, and 5 relate to the world and produce feeling (see the sketch below).
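
    A toy Python sketch of that distinction (nobody's theory, just squiggle-pushing): a lookup table rewrites "2+3" into "5" by shape alone, and any link to actual quantities is a separate, further step, represented here by a made-up ground() mapping.

      RULES = {("2", "+", "3"): "5"}          # pure shape-to-shape lookup

      def manipulate(a, op, b):
          return RULES[(a, op, b)]            # no meaning consulted anywhere

      def ground(symbol, world):
          return world[symbol]                # a stand-in for relating symbols to things

      world = {"2": ["o", "o"], "3": ["o", "o", "o"], "5": ["o"] * 5}
      out = manipulate("2", "+", "3")         # -> "5", obtained by shape alone
      print(out, ground(out, world))          # grounding is the extra step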

    ReplyDelete
  33. I find it interesting that Searle takes the position that none of cognition is computation (i.e., formal symbol manipulation). As Professor Harnad said, it is quite clear that when we do mental arithmetic we are computing, and therefore part of our cognition must be computation. But I think that Searle himself gives another example of computation-within-cognition when he says "he is answering questions as best he can by making various inferences from the content of the story" (6). As I understand it, inference relies at least in part on computation. Entailment, specifically, is part of many of the inferences we make and can be reduced to a natural deduction system, which is clearly just formal symbol manipulation (a toy sketch of this follows below). And in a story as complex as the burger-restaurant story, Searle must admit that some entailment was involved in the inferences that the man (the English subsystem) makes, and therefore that there is some computation within his cognition while he is inferring. Unless I am mistaken in believing that, because inference can be reduced to a natural deduction system, humans must be doing some analogous formal symbol manipulation? Is there a way to imagine how entailment could be carried out by our cognition without involving computation?
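
    As a toy illustration of entailment-as-symbol-manipulation (a sketch only; the token names are arbitrary placeholders I made up, not Schank's actual representation): forward-chaining over uninterpreted tokens derives new tokens by shape-matching alone.

      # Facts and rules are just uninterpreted tokens; nothing here "means" anything.
      facts = {"ordered_burger", "stormed_out_angry"}
      rules = [
          ({"ordered_burger", "stormed_out_angry"}, "did_not_eat_burger"),
          ({"did_not_eat_burger"}, "did_not_pay_for_burger"),
      ]

      changed = True
      while changed:                      # apply rules until nothing new follows
          changed = False
          for premises, conclusion in rules:
              if premises <= facts and conclusion not in facts:
                  facts.add(conclusion)
                  changed = True

      print(facts)   # derived tokens follow by pattern-matching alone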

    ReplyDelete
  34. Frankly speaking, my first thought on reading Searle's article was: why the debate? In the example of humans "understanding" Chinese in the isolated room, he makes clear that we can (1) be a cognitive thinker, making real-world associations when reading English, and (2) be as simple as a computer, reading Chinese and handling it by way of English instructions rather than by directly "understanding" Chinese. Hence, the human brain can in one sense work like a computer program and in another sense not.
    Moreover, I would not expect an AI "tool" to be able to understand in the sense of being grounded in our reality. What makes human individuals differ is their experience, and hence their perception and understanding; to whose experience should we ground the AI, if we are to do so?
    That said, I would also like to learn whether there was any deeper context implied by Searle.

    ReplyDelete
  35. In this paper, Searle refutes computationalism (which he calls 'Strong AI'). He does so by virtue of the Chinese Room Argument, demonstrating that simply connecting the right input to the right output via some program (formal symbol manipulation) is not sufficient to produce understanding on the part of the machine doing the symbol manipulation. He points out that there must be something humans are doing when they truly understand the meaning of a sentence, as opposed to when they simply learn the appropriate manipulations (the sole requirement of computationalism). Searle refutes computationalism in its entirety, but Dr Harnad has very explicitly used the language in class: "cognition is not JUST computation". I agree that Searle was wrong to throw out computation entirely. It seems clear to me that humans take inputs and produce outputs according to some set of rules; the fact that this is not sufficient for all aspects of cognition doesn't mean it has no role to play. Searle says this is because "the symbol manipulations by themselves don't have any intentionality [...] they aren't even symbol manipulations, since the symbols don't symbolize anything." (pg 11). To me, this is where he errs. Instead of totally throwing out computationalism, we could require that the symbols be correctly connected to the things they signify (in the real world) such that the symbol manipulations take on meaning. Could such a machine be considered to understand the symbol manipulations it performs?

    ReplyDelete
  36. Searle's key argument is that true understanding cannot be achieved by only processing symbols according to rules, even if that processing can pass the Turing test. In other words, he argues against strong AI. According to Searle, computers cannot be said to have consciousness or true cognitive states just by looking at their ability to manipulate symbols. He basically highlights the distinction between semantic comprehension and syntactic manipulation. For me, one of the most interesting parts of this reading was the Chinese Room experiment. I can see why it's frequently referred to as a key point in discussions about the philosophy of AI. When we look at it from an outside perspective, the responses generated in the Chinese Room seem like the work of someone with a genuine understanding of the Chinese language. However, the individual inside the room only follows the prescribed rules and lacks true comprehension of Chinese. This challenges the notion that a computer, while executing a program, possesses genuine understanding of the content it is processing.

    ReplyDelete
    Replies
    1. I also agree that the Chinese Room experiment has very solid implications for the limits of a computer's true ability to 'think' in any way that can be considered remotely human. It clearly calls the utility of the Turing Test into question, since any computer that is able to pass the Turing Test, or even a more modern test based on how similarly a computer can behave to a human, can immediately be undermined by the Chinese Room argument. How, though, could we ever truly gauge whether a computer program, coded using conventional methods, possesses genuine understanding? In my opinion, a computer program coded by humans could never achieve such a feat.

      Delete
  37. My main takeaway from Searle's Chinese Room is that even if understanding/intentionality is computation, we will not find it through language.
    Maybe it's because language is what is most unique to humans (and we are an anthropocentric species), maybe it's because it's how we determine other humans to be conscious, but we seem to have an obsession with finding consciousness through language (e.g., the Turing Test).
    Searle beautifully demonstrated that language, as a dependent symbol-manipulation system of the mind, is not the same as the symbol-manipulating system that brings us to understanding. Not all of his refutations convinced me, particularly not his "water pipe" refutation, as I did not see it as a truly integrated system, nor his memorization argument, as memorizing a set of rules is not the same as being built from them, although I was still convinced overall by his case against strong AI.

    P.S. I don't mean to demean linguistics, I still love it; but with language being a dependent system, it isn't sufficient for unraveling the mind as a whole.

    ReplyDelete
  38. After reading Searle's arguments and counterarguments, I understood that although a machine may be able to compute an output correctly, it does not necessarily understand the true meaning or semantics of the input, and thus is not doing the same thing as human cognition. It lacks 'intentionality', which he clarifies refers to the 'feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states'. This is the hard problem that we have yet to solve, since we are still unable to explain our subjective experiences and perspectives.

    ReplyDelete
  39. Regarding the argument about "causal powers equal to those of the brain": this view seems to deviate from the previous readings. Suppose a model, for instance a neural network, implements causal inference on specific tasks (a trending topic in modern neural networks). Such a system may still be only at the T2 level of the TT, or not even close: an email-machine-like AI that can achieve results similar to human/animal causal-inference capabilities under certain circumstances while having a very different function and structure. The modern development of AI is a good example with which to question Searle.
    Moreover, Searle writes that "if AI workers repudiated behaviorism and operationalism, much of the confusion between simulation and duplication would be eliminated." But a machine based on causal inference raises similar problems. Searle, like Turing, is considering a relatively simple, rule-following kind of strong AI. A machine with causal-inference capabilities may not be biologically plausible: an AI that can pass causal-inference tests can be based entirely on mathematics and statistics, and a causal-inference machine that meets a single criterion does not need to pass the T4 Turing Test. A machine based on behavior could also reproduce people's causal abilities after collecting a large amount of statistical data. So his argument seems to contain a contradiction.

    ReplyDelete
    Replies
    1. Furthermore, what is Searle's notion of "strong AI" here? Is it cognitive science, or an advanced version of computer science that is inspired by the brain rather than by statistics?

      Delete
    2. After re-reading 3b, I need to revise some of my understanding of the CRA and update some of my thoughts; my initial thought was one-sided. The example based on causal inference is only a refutation of, and supplement to, the TT; it does not amount to a new, so-called "Searle test". Thanks to Dr. Harnad for his paper. The most important point is that even if an AI completely simulates a person, it may not be conscious after passing the TT.

      Delete
    3. At the same time, it is only somewhat reasonable to place all bets on biology. At this stage, we cannot safely prove that human consciousness must rely on a biological brain in order to exist. The emphasis on the biological brain can also be challenged with the CRA. Some Chinese speakers lose part of their ability to understand the Chinese language, or physics, after brain diseases such as stroke, and some studies can gradually restore these abilities using technologies such as electrical stimulation and brain-computer interfaces. Do these non-biological products show that consciousness can be carried by silicon-based systems? What about without the brain as a carrier? Is there, then, any difference between a strong AI at the T4 level and a Chinese speaker who understands through silicon-based recovery? Did that speaker lose consciousness at that point? It therefore makes no sense to bet everything on biology.

      Delete

  40. Hey Rosalie,

    I agree with you to some extent that we don’t fully understand human cognition and that because of that, it is hard to make sure that it is well replicated in a machine. However, I also believe that if there is one thing we know, it’s that we don’t know everything. If cognition as you describe it is also "a combination of contextual understanding, emotional intelligence, common sense, ethical and moral reasoning, insight, etc.” I don’t see why a machine could not have it. These features are, at least to some extent, measurable and could be assessed in a computer. I believe that even though we don’t fully understand our own cognition, perhaps trying to study/replicate it somewhere else could give us valuable insights.

    ReplyDelete
  41. I have a question about Searle’s conclusion that you cannot produce intentionality without having as a basis a replica of the human brain and neuronal connections. Could we broaden this claim and assume that Searle rejects dualism? He clearly states that ‘If you can exactly duplicate the causes, you could duplicate the effects.’ thus implying that the cause of our intentionality is those brain processes.

    ReplyDelete
    Replies
    1. Hi Aimee, yes, I completely agree. I think it is more than fair to assume that Searle blatantly rejects dualism. At its core, dualism is the idea that the mind and body are distinct from one another, not the same entity; Descartes famously claimed that the mind or soul is totally separate from the body. I agree with you that Searle rejects this idea, and I believe he advocates a more materialist way of thinking: mental processes can be broken down into physical processes in the body (and the brain), and the mind or soul is just a byproduct of those physical processes and their activities. In terms of his Chinese Room experiment, his materialism shows in his explanation that symbol manipulation can be seen as a physical process that does not depend on any understanding of the symbols.

      Delete
  42. I also agree that the brain has many complex components we do not fully understand. However, even though we cannot grasp exactly how the brain works as a system to create cognition, there is still a lot to gain from attempts at modeling and reverse-engineering certain processes or pathways within the brain. For instance, if we try to reverse-engineer the way a specific neurodegenerative disorder affects the brain, we can work backwards from the incorrect "output" behavior to the input that caused it. To do so, however, we need to know which brain networks normally correlate with certain behaviors, so this is not an implementation-independent study. Nonetheless, by adjusting the structure of algorithms that model certain brain networks, we may learn which regions are responsible for the disease's incorrect input-output mapping. Such an algorithm is used as a tool, like the weak AI Searle describes, but it may also have components of strong AI, applying computationalism to specific brain networks used for certain important processes without attributing all of cognition to computation.

    ReplyDelete
  43. I have several reflections. The way I understand Searle's refutation of computationalism is this: passing T2 is computation, which is the manipulation of symbols. Even though he is able to carry out the manipulation, he still cannot FEEL that he understands Chinese, which means cognition is not computation, and passing T2 is not the critical criterion for cognition. I am thinking that 'understanding' is also a weasel word, and that the sense in which you think you do not feel you understand the language may be wrong; maybe the feeling of understanding was never well defined in the first place. I would argue that language mainly functions as a communication tool, and that successfully using Chinese to communicate, as in the example, is already enough to set the boundary of 'understanding', which would mean that Searle's claim is not fully true.

    ReplyDelete
  44. Searle argues that since computers don't have any intentionality, we can't say that they have some form of understanding and thinking. However, I must admit that I am confused about the main takeaway of this text. Is it that AI cannot produce understanding? Or is it that an AI can produce understanding once it has passed the Turing Test? Or even that a machine can think only if it has the same causal powers as those of a human brain? If so, which causal powers are we talking about...?

    ReplyDelete
  45. At the beginning of the article, there is a sentence that impressed me deeply:
    "In strong artificial intelligence, because programmed computers have cognitive states, the programs are not just tools that enable us to test mental explanations; rather, the programs themselves are the explanations." (Searle 2) I had never thought of this, and I found it a really interesting learning point: the program goes from being a mere tool for studying the mind to being offered as the explanation itself.

    In contrast, in the hamburger-restaurant example there are two claims: "1. that the machine can literally be said to understand the story and provide the answers to questions, and 2. that what the machine and its program do explains the human ability to understand the story and answer questions about it." (Searle 2-3). This reminds me of chatting with AI customer service: whether it is a product question or a product-quality report, the AI's replies are often blunt and confusing and frequently do not help us solve the actual problem. They cannot provide practical help, because the AI only processes textual expressions and cannot deeply understand the emotions and feelings that humans are familiar with.

    ReplyDelete
  46. In relation to the V. Other Minds Reply, Searle argues that the understanding of Chinese is dissociable from algorithmic symbol manipulation: though Searle can produce the "right" answer, he lacks the understanding behind it. Searle argues that cognition is more than just computation, since it fundamentally involves intentionality (which comes down to feeling), an aspect that he argues pure syntactic computation cannot reach.

    ReplyDelete
  47. In Searle's Chinese Room experiment, I fail to understand how, if Searle were to memorize all the rules and understand the meaning of each symbol (i.e., which concept it refers to), Searle would still not understand Chinese. After all, isn't language learning primarily about acquiring the knowledge of which symbols represent specific concepts and how these symbols are typically combined to convey those concepts?

    ReplyDelete
    Replies
    1. Hi Liam, I agree that if Searle understood the meaning of the symbols, he would understand Chinese. However, from my understanding, in CRA the person inside does not understand the meaning of the symbols, they simply are able to correlate a certain set of symbols (the "story" batch) to another set (the "script" batch). But I don't think this correlation allows the person to associate the symbols with the concepts they refer to. They can only identify the symbols by their shape, but the characters are still squiggles to them. As Searle wrote, the person "produce the answers by manipulating uninterpreted formal symbols". I think the word "uninterpreted" suggests that there is no understanding happening.

      Delete
  48. The part that had me particularly engaged in this reading is the one relating strong AI with residual forms of operationalism and dualism. Searle first objects to the arguments that mental states in the computer are similar to human ones, arguing that they lack intentionality (as demonstrated through the Chinese room thought experiment). He then answers the claim that programs are independent of their realization as machines (which relates to the strong Turing/Church thesis) by asserting that mental phenomena are dependent on physical/chemical properties of actual human brains (a very neuroscience-oriented view). Is this claim convincing enough to account for the fact that all parts of our “intentionality” are directed by physical properties?

    ReplyDelete
  49. Personally, I found that my thoughts on the previous computation readings aligned with some of the themes in Searle's "Minds, Brains, and Programs," especially his main points emphasizing the importance of subjective components of the mind like "understanding." Firstly, Searle describes the hypothetical "Chinese Room" situation, in which a person with zero proficiency in Chinese is asked to "reply" to Chinese phrases in Chinese by following a set of English instructions. In other words, this is meant to represent computation, as it involves the manipulation of a series of symbols. From this, Searle raises a strong question challenging Strong AI: can it really be said that the mind involves solely symbol manipulation if there is no actual understanding of the language? Searle seems to be highlighting that the manipulation of symbols is far too objective to truly represent consciousness; "understanding" is a part of us that can also be subjective in nature, given that, as Searle mentions in his critique of Schank, it requires making many inferences. This leads me to another question that reframes the debate in the context of free will: if a computer is simply following objective rules and cannot make decisions on its own terms, based on its own "understanding," how can it truly represent human consciousness?

    ReplyDelete
  50. Searle explains that a computer requires a programmer and an interpreter to make sense of the symbol manipulation it does. This reminds me of formal logic, which works in a similar way: there are specific rules, the "syntax", and the semantics are stripped away. After deriving a new formula, a person reinserts meaning and context into the symbols obtained.

    ReplyDelete

  51. In this paper, Searle uses the CRA to demonstrate that passing T2 through computation alone is not enough to explain understanding and cognition. I found it interesting when, in response to the question of whether an artifact, a man-made machine, could think, he said:
    "Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes." (6) If he is insisting that we can only recognize thinking through exact duplication of the human brain, is he saying nothing else is capable of thinking? He gives no alternative other than the obvious answer of replicating human anatomy down to the neurons, which seems like a bit of a cop-out.

    ReplyDelete
  52. It is interesting to me how much emphasis there is on "understanding" in Searle's argument. Although he briefly addresses the arguments about the ambiguity of the word on p. 4, it seems to me that there is a lot left to unpack.

    Ultimately, I suppose that how we answer the question of whether "understanding" is a realistic attribution for these machines boils down to how we define the parameters of the question itself. The notion of "behavioral equivalence", first brought up in Horswill's "What is Computation?", kept coming to mind as I read. That is to say: how relevant is it that, inside the room, the procedure by which Searle is aggregating Chinese symbols is completely different from the procedures used by a native speaker, if from the outside the result is completely behaviourally equivalent?

    In the case of us cognizing humans, observed externally, is each of our minds not just a Chinese Room to the other? Even in the simple example of performing mental arithmetic, two individuals can arrive at the same output while following different procedures (see the sketch below). If the procedures have no overlap, but the input and the output are the same and the procedures are not the particular point of discussion, does it really matter that different procedures were used?
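
    A trivial Python sketch of that mental-arithmetic point (my own toy example, not anything from the readings): two procedures with no overlap in their internal steps, yet identical input/output behavior.

      def sum_by_counting(n):
          total = 0
          for i in range(1, n + 1):   # procedure 1: iterate and accumulate
              total += i
          return total

      def sum_by_formula(n):
          return n * (n + 1) // 2     # procedure 2: Gauss's closed form

      # Observed from the outside, the two are behaviorally equivalent.
      assert all(sum_by_counting(n) == sum_by_formula(n) for n in range(200))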

    As I write, I'm already becoming aware that there are plenty of holes in this perspective, but I thought I'd post some of these undeveloped thoughts for my skywriting regardless, because why not.

    ReplyDelete
  53. Sorry for the late reply. I found Searle's response to the Systems Reply quite interesting. Searle argues that accepting the Systems Reply could lead to absurd consequences, such as attributing understanding to non-cognitive subsystems like the stomach. He challenges the notion that any system processing information, regardless of its nature, can be considered as having understanding. Searle's argument highlights the importance of maintaining consistency when applying the Systems Reply. If we accept the idea that a system's ability to process information and produce outputs implies understanding, then it logically follows that any system, including non-cognitive ones like the stomach or heart, could be considered cognitive. This challenges the logical coherence of the Systems Reply's premise.

    ReplyDelete
  54. Searle's Chinese Room Argument asserts that even if a simulation perfectly imitates human behavior, this doesn't imply the replication of authentic mental processes. As explained in the text, a simulation is not equivalent to a duplication of human consciousness if it lacks intentionality, if it doesn't have a genuine understanding of the world. The Chinese Room experiment shows that superficially imitating human responses only copies the surface parts of thinking without grasping its core essence.

    This debate reminds us that the study of the mind goes beyond behaviorism. Just as Searle emphasizes the limitations of simulating human understanding through formal programs alone, relying solely on animal models may provide an incomplete understanding of the intricacies of human cognition. While behavioral observations and experiments on animals offer valuable insights, they may not always represent the full spectrum of human cognitive processes and consciousness. The biological and cognitive differences between animals and humans highlight the importance of being careful when applying animal study results to human psychology. To truly understand, we should use various methods beyond mere behavioral imitation. Moreover, as an animal rights activist myself, I believe that considering the ethical dimensions of animal research is essential, making sure that these experiments are done with respect for animal rights and with as little unnecessary harm as possible in our pursuit of understanding the human mind.

    Searle's Chinese Room argument emphasizes the need for psychological research to explore the complexities of the human mind and recognize that true understanding goes beyond superficial imitation.



    ReplyDelete
  55. Reading Searle’s paper finally helped me understand why many modern cognitive scientists argue simulating the brain on a computer would not be sufficient to generate intentional states. I previously understood the Chinese room problem, and knew that since the man inside only manipulates “squiggles and squoggles” the symbols remain ungrounded, so he lacks understanding.

    However, I didn't understand how causal relations with the external world were required to generate the intentional states necessary to ground those symbols. By the end of Searle's paper, this became clearer to me! Searle explains how Strong AI is fundamentally dualist, due to its insistence on implementation independence. Since this is the case, there can be no causal relation between the actual physical processes of the implementation and the computations being carried out. In the brain, however, there are causal relations between neurochemical processes and symbol manipulations, which allows for intentional states. Searle sums up this necessity of causation for the existence of intentional states by claiming that, just as we can't make milk by perfectly simulating lactation, we can't make intentional states by perfectly simulating the brain on a computer. I think the key to understanding the implications of his Chinese Room problem, for me, was his distinction between simulation and duplication.

    ReplyDelete
  56. The paper primarily discusses different philosophical setups, aiming to elicit philosophical intuitions about there being something more to "feeling," "intentionality," and "feeling-understanding" than mere computation. However, this philosophical intuition is often circular, begging the question without offering direct justification.

    One glaring issue in debates around consciousness is our limited understanding of the phenomena themselves—consciousness, qualia, feelings, etc. The paper's attempt to invoke a mysterious "other thing" beyond computation, purportedly essential for consciousness, smacks of an inherent bias to make human thought appear less mundane than it may actually be.

    I posit that the Chinese Room Argument (CRA) fails to demonstrate anything meaningful because it is riddled with begged questions. Many of its critics have pointed this out, and the paper's responses to these criticisms often reiterate the same fallacies. For example, one such fallacious claim is that causal reasoning can't be formalized within computational systems. This is directly contradicted by extensive research in artificial intelligence and machine learning, most notably by Judea Pearl. His work on probabilistic reasoning and causal inference showcases that computational systems can indeed capture, model, and reason about causality.
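
    To gesture at that last point, here is a rough sketch in the spirit of Pearl-style structural causal models (the numbers are invented; this is not Pearl's code or notation): a program that distinguishes the observational quantity P(Y|X=1) from the interventional quantity P(Y|do(X=1)) by simulating the intervention, i.e., it reasons about causation rather than mere correlation.

      import random
      random.seed(0)

      def sample(do_x=None):
          """One draw from a toy structural causal model with confounder Z."""
          z = random.random() < 0.5                              # confounder
          x = (random.random() < (0.9 if z else 0.1)) if do_x is None else do_x
          y = random.random() < (0.3 + 0.4 * z + 0.2 * x)        # Y depends on Z and X
          return z, x, y

      N = 100_000
      obs = [sample() for _ in range(N)]
      p_y_given_x1 = sum(y for _, x, y in obs if x) / sum(1 for _, x, _ in obs if x)
      p_y_do_x1 = sum(sample(do_x=True)[2] for _ in range(N)) / N
      print(round(p_y_given_x1, 2), round(p_y_do_x1, 2))         # conditioning != intervening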

    ReplyDelete
  57. Reading the article I couldn’t help but think how complicated intentionality is to program in a machine, mostly because it’s contradictory. Our genetic code is similar to a computer program. If my hair is brown, it is because of this genetic code. My hair color is not intentional, it is a product of my genetic coding. I would imagine that the case for machines is similar. The “intentionality” code/ program of a machine wouldn’t make it intentional because it is still following the code that has been written by us.

    I would like to combine the reading with the idea of machine learning. Machine learning, the way I understand it at least, is a part of AI that enables the AI to adapt and learn without explicit programming. Could a weak AI, through machine learning, become a strong AI? In a way, it’s similar to evolution.

    Also, in the article there is a comment about learning and memorization (12). I think it can be argued that memorization is learning. I made a similar comment on one of the earlier readings: the way we learn is through association with the things we've memorized/learned before. Maybe memorization on its own isn't enough for learning without the ability to associate, but it is certainly a big part of it. If we assume that my point is true, would it be possible for a machine to understand something if it is able both to memorize and to associate?

    One last thing about the "programs are not machines" statement: a machine is defined by its program, and I find it hard to grasp how we can separate the two from each other. All that aside, it was a very interesting read. Personally, I really enjoy reading theories and responses getting debunked.

    ReplyDelete
  58. I don't know why but for some reason the blogger keeps deleting my skywriting for this week, so here is a link to a pdf version of it

    https://pdf.ac/2rhB83

    ReplyDelete
  59. This comment was deleted for some reason, so I am reposting. While I did not enjoy Searle's superfluous writing style, I find his arguments relevant to our discussion of machine mimicry of cognition. Searle raises a good question: are we projecting our intentionality onto these tools or systems, mistaking sophisticated mimicry for true cognition?

    Searle argues that there's a difference between symbol manipulation or outputs in programs and "understanding" which he believes is unique to animals like us.

    Based on the Chinese Room thought experiment, he believes there is an absence of genuine understanding of the Chinese language despite following the program. Comparing this to programs like Schank's, he argues that all the elements (the understanding of the language used for the program, the execution of it, and the Chinese output) "by themselves have no interesting connection with understanding," and thus he believes that these machines and programs do not 'actually' understand anything either. He raises doubts about such ascriptions of intentionality, deeming them metaphorical attributions.

    This thought experiment draws a parallel to machines that may successfully manipulate symbols (T2) but lack genuine understanding and intentionality. Thus, by his example of those programs, simply running a program isn't equivalent to human cognition.

    ReplyDelete
  60. ** I think my week 3 comments were deleted so I am reposting them**

    I find Searle's critique of the Turing Test to be quite interesting and challenges a long-standing view in the field of artificial intelligence. From my understanding, the Turing Test was designed as a practical benchmark for evaluating a computer's ability to engage in natural language conversation. It was not necessarily meant to address the deeper question of consciousness or understanding. Searle's argument highlights the need for a more nuanced discussion about what it means for a machine to possess true understanding and consciousness, and whether a test like the Turing Test can ever adequately address these complex philosophical issues.

    ReplyDelete
  61. To begin, I want to mention what an interesting choice Chinese ideograms are: they're symbols used to convey the gist of things without denoting the pronunciation of the word. This makes me think of the point we discussed in class, that the shapes of the symbols in our language (words) don't map in any serious way onto their referents (nothing about the word "hippopotamus" indicates that it should refer to a large creature I want for Christmas). I'm actually not sure about this point, because we do have etymology; but I guess that is circular, since it only tells us about parts of words, and being grounded still in other words leads to an infinite regress. Moving on, Searle argues that because the person in the Chinese Room could not hold a conversation without the rulebook, since they don't actually understand what the symbols mean, the person is using the book to do computation and not cognition; because of this discrepancy in understanding, the two are different.

    ReplyDelete
  62. In the reading, Searle discusses the claim that strong AI does not require knowing how the brain works in order to find out how the mind works. I completely agree with Searle's response to this argument, since simulating the normal processes of a human brain in an AI does not make the AI strong. Although it can manipulate and reproduce those simple mechanisms according to the given manual, it cannot go beyond the manual. It cannot give a spontaneous response with feelings, and would only imitate those "emotions" according to the manual (if the manual were to lay out several sets of examples of how to react in various situations). Searle adds to this argument by stating, "As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states".

    ReplyDelete
  63. In part of the reading, Minds, Brains, and Programs, Searle mentions how humans ascribe intentionality to domestic animals and primate species, but not to the robot. He explains that humans find it natural to ascribe intentionality to animals like these because of "the coherence of the animal's behavior and the assumption of the same causal stuff underlying it". Although I think Searle's argument is valid, I do not think it is entirely the fact that "we can't make sense of the animal's behavior without the ascription of intentionality", nor their physical similarity to us, that justifies saying they have intentionality. For example, in the case of dogs, their innate drive to protect is shown in actions such as barking, lunging, or stiffening. They may perform such actions without much thinking, or intentionality, since it is their inborn instinct to do so. In my opinion, what shows that animals like dogs have intentionality is their communication with their owners: dogs can show signs of emotion through their facial expressions and actions, and such emotional intelligence better justifies the claim that there is intentionality in their actions.

    ReplyDelete
  64. Minds, brains, and programs by John Searle formally introduces us to the "Chinese Room" thought experiment. Searle argues that manipulating symbols without understanding, or without being conscious of what is happening, is not strong AI. Searle seems to consider genuine AI to be machines that have reached the T4 level; I'm still not too sure what the difference is between T3 and T4 in this case. I really liked how the concepts of semantics and syntax were differentiated. Do semantics even matter for the goal of computation?

    ReplyDelete
  65. In his talk, Searle offers a solution to the hard problem, or at least a way to approach 'consciousness', as he calls it. He defines four features of consciousness: qualitativeness (every conscious state has a qualitative feel to it), which leads to subjectivity (it exists only insofar as it is experienced), unity (all its aspects are experienced as one subjective feel), and intentionality. The main argument he develops is about the neurobiological basis of consciousness. He believes it is entirely caused by neurobiological processes and that, as with any physical phenomenon that functions causally, we can develop a theory of why it works this way and of how the brain creates conscious feels, by finding neural correlates, studying, for example, participants with blindsight (who don't consciously perceive a visual stimulus, an unfelt process, even though it is integrated by the brain: they show the same outward behavior as controls without reporting seeing the stimulus).

    ReplyDelete
  66. In his talk, Searle addresses the complex issue of consciousness, defining it with four features: qualitativeness, subjectivity, unity, and intentionality. He argues that consciousness is a neurobiological process, suggesting that understanding how the brain creates conscious experiences involves identifying neural correlates. Additionally, Searle points out a common misunderstanding in the consciousness debate, distinguishing between objective and subjective claims. He explains that subjective experiences, like pain or itchiness, depend on individual experience, while objective entities, like mountains, exist independently. He asserts that it's feasible to objectively study consciousness, an ontologically subjective domain, highlighting the need for clarity in differentiating between epistemic and ontological aspects of subjectivity and objectivity.

    ReplyDelete
  67. This was a previous skywriting that was deleted for some reason; I posted it on 15/09 but will repost it for the record:

    This reading particularly interested me because it challenges the notion that strong AI can mirror cognition in the brain through symbol manipulation alone. Searle's application of this idea in the 'Chinese Room' scenario demonstrates that semantics cannot be produced solely through the manipulation of symbols; he challenges computationalism, which, as we highlighted in earlier lectures, is the thesis that cognitive processes can be seen as computations that produce meaning. He rejects the idea that a machine executing the right software is thereby capable of having real mental states and comprehension, arguing that intentionality and consciousness cannot be reduced to elementary symbol manipulation, and that comprehension entails more than just syntax: it requires a thorough grasp of semantics, which he contends computers lack. My issue with this is that it seems relatively simplistic and not very nuanced. The experiment puts very heavy emphasis on the individual inside it, and it does not seem fair to expect them to understand: if they do not speak Chinese and cannot logically interpret or manipulate the language, how does that show that a grasp of semantics is essential for cognition?

    ReplyDelete
  68. In "Minds, Brains, and Programs," Searle suggests that mere symbol manipulation, devoid of understanding, doesn't equate to consciousness. It's interesting to consider whether the medium of these manipulations plays a part in intentionality. This point touches upon a deeper philosophical debate of if we're to attribute consciousness to a system, is it enough to focus on its external behaviour, or do we need to delve into its internal state? As we continue to explore the realms of AI and consciousness, these questions become increasingly relevant, highlighting the need for a nuanced approach to our understanding of intelligence.

    ReplyDelete

  69. We've been tackling the tough questions around Searle's Chinese Room Argument. The room shows us a system that seems to understand Chinese but actually doesn't. It's like when you use Google Translate: it gives you the right words but doesn't really "get" the language. This leads me to wonder: what if AI could somehow go beyond just following rules and start to show signs of real understanding? Imagine that, after a trillion sentences, Google Translate started to quickly pick up on nuances or jokes in a given language, learning certain things it wasn't programmed to know. Could we see a future where AI starts to understand things in its own unique way?

    ReplyDelete
