Monday, August 28, 2023

10c. Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling.

Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [in special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue.


The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).

129 comments:

  1. Professor Harnad’s summary of Turing’s work serves as a very concise overview of the major topics covered in PSYC 538. This paper ties together many of the ideas we have covered in a way that makes them easy to understand and makes the connections between them easy to see. Before reading this, I did not fully understand the difference between the physical and mathematical variants of the Church-Turing thesis. In class we referred to them as strong and weak, but I think that physical = strong and mathematical = weak in this writing. One paragraph explains how the physical CT-thesis states that all physical processes can be simulated by computation, but this doesn’t mean that the physical world is only computation, which cleared up some lingering confusion that I had between the CT-thesis and computationalism (which states that all of cognition is computation).

    ReplyDelete
    Replies
    1. Megan, the Weak CTT is that what mathematicians are doing when they are doing a "computation" is what a Turing Machine does (symbol manipulation: what is that?). The Strong CTT is that computation can model and simulate (formally, symbolically) just about any object and property and process in the universe. What is the difference between a physical object (e.g., an icecube) and a computational model or simulation of it?
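
      To make "symbol manipulation" concrete, here is a toy Turing machine sketched in Python (my own illustration, not the paper's; the states, tape symbols and rule table are invented for the example). Note that nothing in the rules depends on what "0" or "1" might mean: the manipulation is based on arbitrary symbol shape alone.

        # A toy Turing machine: a finite rule table rewriting symbols on a tape.
        RULES = {  # (state, symbol) -> (new symbol, head move, new state)
            ("scan", "0"): ("1", +1, "scan"),  # flip 0 to 1, move right
            ("scan", "1"): ("0", +1, "scan"),  # flip 1 to 0, move right
            ("scan", "_"): ("_", 0, "halt"),   # blank square: halt
        }

        def run(tape_str):
            tape = list(tape_str) + ["_"]
            head, state = 0, "scan"
            while state != "halt":
                tape[head], move, state = RULES[(state, tape[head])]
                head += move
            return "".join(tape).rstrip("_")

        print(run("0110"))  # -> "1001": squiggles in, squoggles out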

      Delete
    2. I think the strong CTT claims that computation, or mathematical and logical structure, directly shares the same shape as the physical structure of reality. However, this overlooks the complexities of physical objects and the underdetermination and inaccuracies a computational model or simulation would have. Physical objects interact with their environment in complex ways governed by physical laws; computational models can only simulate these interactions based on the assumptions built into the model.

      Delete
    3. Evelyn, computational models are not iconic models. A simulated ice-cube is not the shape of an ice-cube. It is the "shape" of a computation -- an algorithm for manipulating symbols, like 0 or 1 (or Searle's "squiggles and squoggles"). Symbol shape is arbitrary.

      But the symbols and symbol manipulations in an (accurate) computer simulation (or model) of an ice-cube are interpretable by the user as the properties of the ice-cube (length, width, size, weight, temperature, shape, rate of melting, etc.).

      If any properties are wrong, or insufficient, the algorithm can be revised to bring them closer to the properties of the ice-cube: That's what is meant by "approximation" in the CT-T, which can always be made fuller and more accurate (just as you can always make a verbal description more and more accurate). But the shape of the words themselves is not the shape of the thing they are describing!

      (Go back to Weeks 1-3 to make sure you understand what a computational simulation is and isn't. Test yourself by explaining how it is related to a Virtual Reality (VR) simulation, and to a computational model of a rocket you want to build on the basis of the simulation.)
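
      (And here is a minimal sketch of what such a computational simulation amounts to -- the numbers and the linear "melt law" below are invented placeholders, refinable at will, which is exactly the CT-T point about ever-closer approximation. Nothing in the program is cold or wet; its symbols are merely interpretable, by us, as mass and temperature.)

        # A toy "ice-cube": numbers a user can INTERPRET as grams, degrees C,
        # and melting. The melt-rate constant is a made-up placeholder; a
        # better approximation would refine it (or the whole update rule).
        def simulate_melting(mass_g=20.0, air_temp_c=25.0, melt_rate=0.01,
                             minutes=60):
            for t in range(minutes):
                mass_g -= melt_rate * air_temp_c  # crude linear melt law
                if mass_g <= 0:
                    return t + 1, 0.0             # fully melted
            return minutes, mass_g

        t, left = simulate_melting()
        print(f"after {t} min: {left:.1f} g left")  # interpretable, not meltable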

      Delete
    4. I apologize for misusing the words here. I meant that the computational model could not have one-to-one representations of the physical objects, due to the underdetermination of our theory and knowledge. I also think that the VR simulation is a kind of illusion, merely based on what we can perceive.

      Delete
    5. Evelyn, the model cannot be exhaustive, but it can be made as close as you wish; but the "representations" are just squiggles and squoggles that are interpretable as a rocket, and that can be made, by a VR, to look, sound and feel like a rocket. What's the difference between a computational simulation and a VR simulation?

      Delete
    6. I think the main difference between a computational simulation and a VR simulation lies in what matters for each. A VR simulation aims to replicate the look and feel of physical entities for a human perceiver; it is highly interactive and sensory. For VR, what we care about is the observable output -- it is an engineering goal -- and how closely that output resembles the entity in the real world. The underlying computational mechanism, which is what concerns us in cognitive science, does not matter for VR, as long as the output looks and feels like the simulated environment.

      A computational simulation, by contrast, is a cause-and-effect model: both its structure and its functioning matter. It also seems relatively more independent of human perspectives and sensory experiences. (I feel like I am just listing the ideas I learned from the slides and readings; I will try to figure out how to integrate them to answer this question, while looking for comments from others.)

      Delete
    7. While I grasped the concepts of VR and the ice cube, I am still confused about how it could apply to a T3 operating solely on computation. In one of my previous skywritings, I questioned why a grounded T3 couldn't operate only through computation. It seems plausible that if all its sensorimotor capacities were translated into symbols, akin to how our receptors convert input to electrical signals, an algorithm could facilitate the manipulation of objects based on diverse robotic sensor inputs. The professor's response emphasized that a simulation of an ice cube melting is not equivalent to a real ice cube melting. However, why can't we have a T3 robot functioning in a non-simulated environment, and detecting a real ice-cube melting? Could it not have detectors sensing a real ice cube melting, manipulating these detections to generate an action — similar to how we interpret signals from somatosensory receptors? I fail to comprehend why the virtual melting of an ice cube would be pertinent in this case. Does this imply that a computational assessment of a real ice cube melting would render it virtual?

      Delete
    8. Hi Natasha, let me see if I can clear a few things up. Hopefully I do not say something that is incorrect.
      You sort of say it yourself in your comment - it is plausible that through sensorimotor capacities, the sensory data could be translated into symbols, which can then be used in computational processes by the T3. This is completely plausible, you're right, but the point to note is that you need a sensorimotor apparatus in order to have the sensorimotor capacity to detect and translate the sense data in the first place. Computations are purely algorithmic. In a T3, it could be the case that computational algorithms are how the T3 facilitates the manipulation of objects, movement, speech, etc., but you still need an actual sensorimotor apparatus in order to carry out the DOing capacity.
      A computational assessment of a real ice cube melting wouldn't render the real ice cube virtual - the only thing that would be "virtual" is the computational assessment. The ice cube is very real.
      The virtual melting of the ice cube applies here because VR is perhaps the extent of computation. The closest computation can get to a real ice cube that is melting is VR. If we’re talking about computationally simulating an ice cube, this is the closest we can get to it being “real”. If we’re talking about a computer modelling an ice cube, and then eventually printing a real ice cube (made of H2O) via the steps outlined in an algorithm, then we are no longer talking about “just a computer” and “just computation”. As soon as the machine has real DOing/producing capacity, we’ve left the bounds of “just computation” as a machine would necessarily need a sensorimotor apparatus to have these capacities.

      Delete
  2. The article allowed me to clarify what the physical Church-Turing Thesis (CT) is. This thesis is an extension of Turing's original conception of computation. The concept is based on the belief that nearly any physical dynamical structure or process can be simulated and approximated by computation to a high degree of accuracy.
    However, if I understood correctly, the physical CT-thesis does not imply that everything in the physical world is just computation. It recognizes that a computational model can closely approximate physical processes, but this doesn't mean that these processes are inherently computational in nature. For example, a computer simulation of an airplane can accurately represent various aspects of flight, but it is not the same as an actual airplane. In essence, the physical CT-thesis highlights Turing's belief in the power of computational models to simulate complex physical processes, while also acknowledging the distinction between simulation and reality.

    ReplyDelete
    Replies
    1. Amélie, correct, but there is nothing subtle about the fundamental difference between an object and a simulation of it (except in a Virtual Reality, where you have to take off your gloves and goggles to see the difference).

      Delete
  3. This paper was able to describe in a very clear way the implications of Searle’s Chinese Room argument and how it relates to the easy problem. Searle was able to conduct this thought experiment in order to show that cognition is not all computation as is suggested by computationalists. This was because although Searle could carry out the same memorized steps, he was not understanding Chinese while doing it because of a lack of symbol grounding. What I think is an interesting insight is that Turing was not a computationalist in this way. Rather, he would’ve known that symbol grounding and in turn sensorimotor systems were needed to pass his verbal test. My question is then, if he knew that they would need these sensorimotor capabilities anyways, why is the test designed only for verbal communication?

    ReplyDelete
    Replies
    1. Hi Jenny! I believe that Turing initially created the test for solely verbal communication in order to emphasize the importance of performance capacity to answer if machines could think (testing performance capacity, as it can be measured, rather than 'thinking'). Therefore, I think that testing verbal communication is just the starting point for Turing! I think that in order for a machine to pass a TT (even if it only required verbal communication) they would still need sensorimotor capabilities to perform as a human does.

      Delete
    2. Jenny, Shona's reply is right. The original verbal T2 test was just for pedagogical purposes -- has ChatGPT passed it? -- just as the unfortunate title "Imitation Game" was.

      Delete
    3. As many of my peers have mentioned, this paper serves as a great summary of all the big topics we have encountered so far in the course. It also serves as strong evidence for T3 as the level of the Test that Turing really intended (or should have intended), in its emphasis on the importance of sensorimotor dynamics. I say really intended in that to pass T2, at least T3 is necessary, since it cannot be definitions all the way down; the symbols from which we build the rest of the definitions must be grounded in direct experience. I say should have because this fact is not apparent to most, as Harnad has indicated, causing people to dwell only on computational capacity rather than robotic. This clarification, however, does not bring us closer to solving the hard problem, due to the OMP. If we cannot even be sure that other people can feel, we are confronted with the same problem when dealing with reverse-engineered robots. This intuition is further supported by Searle's CRA, in that even if (computational) simulations can be accomplished, they do not necessarily guarantee (a feeling of) cognising.

      Delete
    4. Jocelyn, the OMP is not the HP. Explain the difference.

      Delete
    5. As for Professor Harnad's question on ChatGPT, I think that it cannot be seen as a successful reverse-engineering of total indistinguishability in verbal capacity, since the Big Gulp is not realistically what humans do in order to perform the same way; just because it is totally indistinguishable doesn't mean that is the way that we do it. The purpose of the Turing test (T2 specifically, in this case) is to bring us closer to solving the problem of cognitive science, which is the easy problem of how and why we can do the things we do. It is not to demonstrate that replications are possible, especially not by using resources that are not realistic to the human experience. However, its ability to do so despite not doing it the way we do does have implications for the symbol grounding problem; is direct sensorimotor experience inherently necessary, or only in the context of humans, since we are incapable of the Big Gulp?

      Delete
    6. The OMP is that one can never be certain that anyone but themselves feels, whereas the HP is about how and why we feel.

      Delete
    7. Jocelyn, what are Strong and Weak Equivalence? Does Turing call for Strong or Weak Equivalence? But Turing does call for Total Indistinguishability: you can't have that without T3 symbol grounding.

      Delete
    8. Weak equivalence is the same output for the same input (i.e., I/O equivalence), where strong equivalence is the same output for the same input in the same way (i.e., I/O and algorithmic equivalence). Turing only called for weak equivalence, evident in his emphasis on performance capacity rather than how that is programmed.
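
      To make the distinction concrete, here is a toy illustration in Python (my own example, not from the reading): two sorting procedures that are weakly equivalent -- identical outputs for identical inputs -- but not strongly equivalent, since their internal algorithms differ. On Turing's criterion, this I/O indistinguishability is all that is asked for.

        # Weakly equivalent: same input-output mapping.
        # Not strongly equivalent: different internal procedures.
        def bubble_sort(xs):
            xs = list(xs)
            for i in range(len(xs)):
                for j in range(len(xs) - 1 - i):
                    if xs[j] > xs[j + 1]:
                        xs[j], xs[j + 1] = xs[j + 1], xs[j]
            return xs

        def merge_sort(xs):
            if len(xs) <= 1:
                return list(xs)
            mid = len(xs) // 2
            left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
            out = []
            while left and right:
                out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
            return out + left + right

        data = [3, 1, 4, 1, 5, 9, 2, 6]
        assert bubble_sort(data) == merge_sort(data)  # I/O-equivalent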

      Delete
    9. Speaking of strong and weak equivalence, the difference between these two kinds of equivalence is whether or not the same way is used to get from the same inputs to the same outputs -- a case of underdetermination. While underdetermination is a point of attack against computationalism, a causal explanation of the know-how is what the reverse-engineering process wants to discover.

      Delete
  4. This reading provides a good summary of most of the topics that we have discussed in class so far. Firstly, it discusses the great contributions of Turing to the field of cognitive science, with his TT. Next, it highlights Searle’s CRA addressing the OMP, which is about knowing that other people think and feel. Moreover, it explores the fact that compared to the easy problem of how and why we can do what we can do, the hard problem of how and why we feel what we feel is hard to solve, and potentially even insoluble.

    ReplyDelete
    Replies
    1. I agree! As a lot of people have said, it would have been a great summary for the midterm. It touches on every point clearly and shows how the different ideas unfolded. It first speaks about Turing, and how he accepted the Church-Turing thesis but did not necessarily think that cognition is just computation. I hadn't fully understood that before. He never thought that the Turing test would help solve the hard problem.

      Delete
    2. Marine, exactly ! From my understanding, Turing was simply hoping to answer the Easy Problem since the Hard Problem is not even possible to answer with the use of an algorithm/computation. Indeed, Turing’s method focuses only on what is observable, therefore testable, such as the action of doing. He could not even try to answer the Hard Problem since feelings are not observable, hence testable with answers to some questions.
      I agree with you and with everyone that this text is very kid-sib, and would have helped for the midterm. :)

      Delete
  5. This 3rd reading of the week, in my opinion, is a very clear and concise summary of the course. It touches upon every big question we have studied, TT, symbol grounding problem, cogito, … (I would have loved to have read it before the midterm). That aside, the question still remains: how do we feel? I do not know if we are ever going to answer that question but I have two new ideas that could help. Why – feelings can be seen as a form of feedback in order for species to survive (evolutionary pov). And for TT, what about non-verbal communication, can we really reverse engineer without this part of communication?

    ReplyDelete
    Replies
    1. Hi Garance, I agree this was a great summary of what we have learned so far (answering the midterm questions for us), and it also captured Turing's great contributions to the field of cognitive science (as Shona pointed out below). I also agree that feelings must have been evolutionarily adaptive rather than maladaptive, otherwise they would have been lost throughout evolution, but I don't think this fact gets us much closer to the "how" or "why" of feelings unless we can explain what the adaptive advantages were. This goes beyond explaining the adaptive advantages of emotions or moods; it must explain why it feels like something to understand, or to be hungry (rather than just automatically seeking food when our stomach is empty), etc. I'm beginning to grasp why it has been suggested that the HP is insoluble.

      Delete
    2. Garance, nonverbal communication is part of T3. (Do you think you would have fully understood without the Chapters on computation, Symbol Grounding, Categorization, Evolution and Language?)

      Jessica, good grasp.

      Delete
  6. I thought this reading was really useful in summarizing the main aspects of the course. I found that the added part about the link between Searle’s Chinese room experiment and Descartes’s cogito ergo sum quite fascinating. While it is quite a ‘primitive’ argument and very philosophical, it is very enlightening to look at the main issues of cognitive science through that lens. And in its simplicity, I can conceive quite easily that it could be seen as a foundational argument for the study of cognitive science.

    ReplyDelete
    Replies
    1. Aimée, fine, but read the other replies of Week 10 too.

      Delete
  7. This article combines several topics we have covered in this course and shows them in a format that highlights the connections between them. It’s a great summary. Harnad explores a wide range of subjects that we discussed previously, including the implications of Searle’s CRA, the symbol grounding problem, the "easy" and "hard" problems of cognitive science, and more. It highlights the current struggle in cognitive science, noting that while progress is made in explaining behavior, understanding the subjective, feeling side (the "hard" problem) is exceptionally challenging, perhaps unsolvable. To put it simply, Harnad points out the limitations of a purely computational approach to explaining cognition, questioning the link between symbolic manipulation and genuine understanding. It stresses the ongoing challenge of tackling the "hard" problem of consciousness, recognizing that the subjective dimension of cognition remains a complex puzzle.

    ReplyDelete
    Replies
    1. Hi Julide! I also found that this essay was a great summary of this entire course, and emphasized the important connections between each of the topics we have discussed. I think that the structure of the article highlights and clearly defines the pertinent problems which must be solved by cognitive science, and some of the past attempts the field has made to answer these questions. Finally, I thought that this essay was very effective at putting Turing's immense contributions to the study of cognitive science in context.

      Delete
    2. Julide and Shona, sounds like you are up to speed with this course. Now put it together with language (Weeks 8 & 9) and week 11 (the OMP and the meaning of life...)

      Delete
    3. I am having trouble finding something original to say for 10c. This reading integrated what we have seen so far, adding feeling into the picture. A significant addition is that now we see that the reason Searle knew he did not understand Chinese is because it feels like something to understand Chinese. This means feeling is an essential difference between computation and cognition. This is applicable to language because it feels like something to understand a language, and as for UG, we seem to have an internal and perhaps biological FEELING of when UG is used correctly (because we never use it incorrectly, it is plausible we feel it).

      Delete
    4. Nicole, our sense that a UG violation is wrong is felt, just as an OG violation is felt. The difference is that we learn OG, whereas UG is just doing the job for us without our feeling a thing except the result, the way Ohrie remembered Mrs. Lawrence.

      Delete
    5. Hi Professor Harnad. I think that the discussion of language, as it relates to this article and the greater aim of the course, shows that human language is a central cognitive process which must be explained and understood for cognitive science to answer the easy problem, and if we are to ever create a fully T3-passing robot. I think language is also particularly important due to the similarity between language and computation (as discussed in class: language as a grounded form of the Strong Church/Turing thesis, and computation as a subset of language). The topics covered in week 11 (the OMP and the meaning of life) highlight that what matters is feeling, as feeling is what drives the other minds problem (we cannot know how another organism feels) and the hard problem (why do we feel?). I think that because feeling cannot be modelled, and we cannot know how it feels to be another person or being, feeling is a salient problem for cognitive science.

      Delete
    6. I agree with everything Shona said. I wanted to expand on OMP. The main challenge of OMP is the difficulty of ascertaining the feelings of others, particularly in species lacking verbal communication. This underscores the inherent complexity. After I read the relevant articles, I realized that language abilities kind of alleviate the OMP within our species. This allowed me to redefine OMP’s scope. I realized that the problem is more pertinent to other species. Therefore, relying on mind reading for understanding the felt states of non-human organisms introduces challenges, thus making the OMP more salient in the broader context of cognitive science.

      Delete
  8. As everyone is saying above, this is a great summary and would have been great prep for the midterm. One thing that this reading cleared up for me was that Turing was a computationalist only in the sense that he thought just about any physical, dynamic structure could be modeled computationally (the physical version of the Church-Turing thesis), but he did not necessarily think that cognition is just computation. In order to pass the TT the robot would have to be a sensorimotor robot, capable of doing more than the verbal TT by drawing on grounded symbols. And the successful TT-passing robot may or may not feel, but that is the HP, and the consensus so far is that we won’t be able to solve it. What I would be interested in knowing is, once we're able to create a physical TT-passing robot (assuming this is possible), if we were to interact with it, would it make us feel things similar to how other feeling-beings make us feel?

    ReplyDelete
    Replies
    1. Hi Fiona, what level TT-passing robot are you wondering about? I’d argue we already have ChatGPT and other forms of AI that definitely pass T2. I also think interacting with any TT-level robot would instill some sort of feeling into us, since simply existing is to have felt experience. People fall in love every day through the internet to long-distance lovers they’ve never seen (or bots)…

      Delete
    2. Fiona, good question; I think Kristi's answer is right, but I'm not sure ChatGPT passes T2...

      Delete
    3. Kristi, I actually don't think Chat GPT passes T2; it makes many errors humans would not make, mostly related to the symbol grounding problem. That being said, I agree with what you are saying about T2, T3, and T4 robots being able to cause feelings in us. So much of our communication in our relationships with friends, colleagues, romantic partners, etc. now takes place over text and email, and we all feel things from those interactions. I don't think Chat GPT could produce real feelings in me the way a human does (at least not with the 3.5 version I'm used to), but a fully T2-passing robot probably could, especially if I was not aware it was a robot.

      Delete
    4. Adrienne, the solution to EP has nothing to do with feelings. Producing feelings is the HP. But the one that needs to feel is the T3/T4, not the user (us). (I'm sure ChatGPT can produce the feeling of frustration in you, yet ChatGPT doesn't feel anything. But does it even pass T2?)

      Delete
    5. I know it is not the solution to the EP, I was only addressing the last part of Fiona's question of whether a T2-T4 passing robot could elicit feeling in us (not the other way around). Chat GPT certainly doesn't feel, and I don't even think a T3 robot necessarily does either, because feeling is more than sensorimotor capacity. Chat GPT does not pass T2 because its words are ungrounded, therefore it does not have the same verbal capacity as people.

      Delete
    6. Adrienne, right. But inducing feelings in us is just a symptom of our mirror-capacity misfiring...

      Delete
    7. Hi Prof, I’m not sure I see the error of my ways in saying that ChatGPT passes T2. T2 is verbal indistinguishability but not grounding, since grounding can only be addressed with a T3 robot that has sensorimotor capacities to interact with its environment. Doesn’t that mean a T2 robot is ungrounded (as per Adrienne’s comment)? This is why the CRA proves that a T2 robot can pass the Turing Test without grounded symbols (i.e. no meaning). Please correct me if I’m wrong, but did you mean that ChatGPT may not pass T2, not because of grounding, but because its verbal output is not indistinguishable from a human's? Because that I agree with; it still sounds like a robot. But we can have some sort of “conversation” with it as if thru email.

      Delete
    8. Kristi, you ask a very valid question.

      I happen to think that GPT is not yet T2-indistinguishable, but, more important, I think it's cheating, because of the Big Gulp, just as a student would be cheating if, instead of responding to an exam question, they submitted GPT's response instead.

      This raises a huge number of questions for cogsci and the reverse-engineering of cognitive capacity.

      If T2 is passed by a computer, it is not a robot; it is just a computer, executing algorithms on its input.

      The Big Gulp is not just a computer, computing; it's an enormously large database of words that have already been written by thinking (cognizing) people. The status of that database in the test is unclear.

      Is it part of the testee (i.e., the entity whose capacities we are testing)?

      Or is it part of the input to the testee?

      Is the database serving as an "oracle"?

      Then is the testee passing T2 or consulting an oracle? or a library? Is looking up something in a book thinking? Looking up something on the web?

      These are questions that could be raised about GPT.

      But others may disagree with me that GPT is cheating. These are still early days.

      Other questions are specifically about reverse-engineering for cogsci: If GPT does count as passing T2, has cogsci thereby reverse-engineered human language capacity?

      Again, I think not, because swallowing and processing the Big Gulp is not part of human DOing capacity.

      And, most important of all, can GPT be grounded, so as to reverse-engineer T3 capacity too -- which, as we know, is part of Turing's criterion of Total Equivalence and Indistinguishability with human DOing capacity?

      Direct grounding is bottom-up, from first learning the sensorimotor category to baptizing it with an arbitrary name ("cat").

      But the only possible way to ground GPT's Big Gulp is top down, from the word down to the sensorimotor category.

      So the mentions of "cat" already in the Big Gulp have to be grounded in the T3 robotic capacity to recognize and interact with their referents, e.g., cats, indistinguishably from any of us.

      But, to make that possible, "cat" has to be connected to its referent, and so do all cats' distinguishing features, which are other content-words that are already in the Big Gulp. How does top-down "grounding" connect with bottom-up grounding?

      It is clear how grounding happens bottom-up: by first learning to categorize -- learning to do the right thing with the right kind of thing, learning to detect the categories' distinguishing sensorimotor features (say, to distinguish cats from dogs) through unsup and sup sensorimotor learning -- and then naming that category "cat". But that is bottom-up learning, not top-down. "Dog" and "cat" and "whiskers" and "tails" are already up there in the Big Gulp, but not grounded. How does the bottom-up and top-down grounding happen? In parallel? One by one?
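
      (Here is a toy sketch of that bottom-up route only -- the two "sensorimotor features", the data, and the clustering rule are all invented for illustration; real grounding would of course involve real sensors and effectors:)

        import random
        random.seed(0)

        def dist(a, b):  # squared distance between two feature vectors
            return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

        # Invented "sensorimotor features", e.g. (whisker-length, tail-wag rate):
        cats = [(random.gauss(5, 1), random.gauss(2, 1)) for _ in range(50)]
        dogs = [(random.gauss(2, 1), random.gauss(6, 1)) for _ in range(50)]
        pts = cats + dogs

        # "Unsup" learning: find two clusters in the features, with no labels
        # (a crude k-means pass).
        c1, c2 = pts[0], pts[50]
        for _ in range(10):
            g1 = [p for p in pts if dist(p, c1) <= dist(p, c2)]
            g2 = [p for p in pts if dist(p, c1) > dist(p, c2)]
            c1 = (sum(x for x, _ in g1) / len(g1), sum(y for _, y in g1) / len(g1))
            c2 = (sum(x for x, _ in g2) / len(g2), sum(y for _, y in g2) / len(g2))

        # "Sup" learning: corrective feedback on labelled trials says which
        # cluster is the kind you should do the cat-things with.
        c1_is_cat = sum(dist(p, c1) <= dist(p, c2) for p in cats) > len(cats) / 2

        # Only now is the arbitrary name attached to the grounded category:
        def name(p):
            near_c1 = dist(p, c1) <= dist(p, c2)
            return "cat" if near_c1 == c1_is_cat else "dog"

        print(name((5.2, 1.8)))  # -> "cat": a grounded, hence usable, name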

      And we can't forget that there are two kinds of grounding: direct sensorimotor grounding and indirect verbal grounding. How that works bottom-up is clear, but how would it work top-down?

      These questions are beyond the requirements of this course (and most of them are beyond me too, since the Big Gulp and GPT are so new). But they definitely need to be confronted if cogsci wants to consider whether GPT passes T2.

      These questions are of course no problem for the question of whether GPT is a useful computational and database tool for humans to use.

      But that question is a lot easier than cogsci's EP.

      Delete
    9. Hi Prof, thank you for your reply, it has been very helpful for understanding your arguments for why Chat GPT does not pass T2. I will address your points and ask another question in the following:

      - Indeed, the Big Gulp shows that Chat GPT’s computations are not truly representative of human DOing. We cannot swallow the Big Gulp, whereas GPT has access to, theoretically, infinite information.
      - However, how do we avoid creating a system where we are only evaluating the capacities of the consulting oracle? First, we would need to reverse-engineer a system that is not simply given access to the Big Gulp. That is not human cognition and, further, is just a projection of our own abilities (the oracle). But then we would also need a system that undergoes a development similar to humans', where it would be able to learn through trial-and-error categorical perception or via language abilities.

      - We can both agree that Chat GPT cannot be grounded as to create T3 indistinguishability since it does not have sensorimotor abilities. Thus, to learn in a way that is similar to human cognitive capacities, GPT would have to ground new categories thru indirect verbal learning.

      - My question then becomes; would this represent Turing’s T2 verbal indistinguishability? Turing posits that reverse-engineering cognition would go as follows: Verbal indistinguishability > sensorimotor indistinguishability > internal structure and function indistinguishability > biorobotic indistinguishability. However, I think Turing has things in the wrong order. It seems that to reverse-engineer cognition we need to have some form of sensorimotor grounding FIRST, AND BEFORE verbal capacities because for anything to be learned using language, the referents used to describe a new category need to be grounded first.

      - So, as you asked, has GPT reverse-engineered human language capacity? In a way that is not just the Big Gulp or the corresponding oracle? In this vein, GPT must learn through top-down connection between the word and the sensorimotor capacity. BUT, how do we get the abstract word/category and its referent to connect, if none of the content words/features described are grounded to begin with?
      - We know symbol grounding occurs bottom-up, from sensorimotor shapes within our world to the abstract words/shapes of their referent (seemingly Turing’s T3 sensorimotor indistinguishability). But interestingly, indirect verbal grounding (seemingly Turing’s T2 verbal indistinguishability) NECESSITATES some form of sensorimotor grounding first. Unless cogsci still has something to discover in terms of connecting top-down to bottom-up grounding.

      - Nevertheless, with Chat GPT, it seems we’ve once again skipped over a critical aspect of human cognition, which is GROUNDING, MEANING, FEELING.
      - Thus, I agree that GPT does not pass the T2 or T3 Turing Test. But I am also confronted with the fact that Chat GPT is a Zombie, not because it is distinguishable in its verbal capacity, but because it is distinguishable in its sensorimotor capacity (which is nonexistent), which makes its verbal capacities unlike ours.

      Delete
  9. I don’t have much to add to what everyone else is posting, except I find it ironic that we are blasted with CORRELATION DOES NOT EQUAL CAUSATION throughout our psychology bachelor’s degrees, yet it seems that’s all we're really investigating in the present-day field of cognitive science.

    ReplyDelete
    Replies
    1. Kristi, not quite. In other fields of science (physics, engineering) explanation starts with observing correlations, but it leads to testable causal explanation. Same is true in cogsci reverse-engineering for the EP. It just fails (so far) for HP. Please see the other replies.

      Delete
    2. Hi Prof, I concur that both the Hard and Easy Problems are empirical problems that need to be addressed. I think it’s helpful to be reminded that there is still immense value in investigating the Easy Problem, even if it cannot (at this time) answer the Hard one. Otherwise, it’s easy to feel like cogsci has gone off the beaten track. Alas, the field is presently investigating how and why we DO what we do, but not yet how and why we FEEL what we feel. Cognition (thinking) cannot just be computation (symbol manipulation) because we experience what it feels like to interact with our world thru sensorimotor perception. What is the causal role of these correlated feelings?

      Delete
    3. Kristi, not quite. Cogsci can answer whether we feel THIS or THAT. That's just DD's "heterophenomenology", which is part of the EP. The HP is a lot harder than that.

      Delete
  10. I felt like this reading was a good summary of the first part of class since it puts together the important concepts that we have seen. I found the section that talked about Turing not being a computationalist quite interesting. At first glance, one would think that Turing believed that cognition is computation, but as professor Harnad says in the reading, Turing was not a computationalist; he did not believe that cognition could be explained by computation alone: Turing was actually aware that to pass the verbal TT the candidate system would have to be a sensorimotor robot. To me this just shows how brilliant Turing was because he was capable of thinking past the technology that was available at his time. Another thing that I noted from the reading is that the hard problem is unsolvable, or at least for now it is unsolvable. This is because we cannot reverse engineer feelings, at least not with the technology we have now. Indeed, the reverse engineering of human or animal capacity is only concerned with doing, not feeling.

    ReplyDelete
    Replies
    1. Valentina, good summary. But there are disagreements: Some people think everything (including icecubes) IS a Turing Machine, so that would mean that cognition, too, is just computation. And there are plenty of people who think they have solved the HP (usually with weasel-words...)

      Delete
    2. Yes, and thinking rocks/ice cubes are Turing machines is not what Turing said or intended. The Church/Turing thesis states that everything can be simulated by a Turing machine, but again, as highlighted in class, a simulation of an ice cube is not an ice cube (it would require the same materials and properties to be so). Simulating feeling, however, I feel is not possible, and this invokes the OMP. If you created a machine that is sentient, how am I supposed to know it is? It may be convincingly so, and have the functional correlates of feeling, but that is not the same. Even if we could simulate feelings, it doesn't solve why or necessarily how we have them (HP).

      Delete
    3. Josie, but how could you simulate feeling, rather than just the correlates of feeling?

      Delete
    4. I think what Josie was getting at is precisely that - if you simulated correlates of feeling, the result may be akin to someone who feels but you would be unable to know if you had actually produced feeling due to the OMP.

      Delete
  11. This paper really ties together everything we have seen so far in the course. Harnad demonstrates that Alan Turing likely wasn’t a computationalist (someone who thinks that thinking is just computation). He likely understood that sensorimotor robotic capacities are necessary for thinking, and that some symbols (i.e., words) need to be grounded through sensorimotor experience (i.e., pointed to and named by someone else) for them to be meaningful. To successfully reverse-engineer our cognitive capacity, a system needs to go beyond indistinguishability in verbal exchange, as is the case in Turing’s “Imitation Game” and the T2. Passing the T3, total indistinguishability in sensorimotor performance capacity, will bring us closer to the goal of solving the easy problem, of understanding how and why we can do what we can do (i.e., thinking, understanding…). However, Harnad argues that we are still very far from solving the easy problem, and even more so the hard problem, of how and why we feel.

    ReplyDelete
    Replies
    1. Anaïs, good summary. Turing showed he was not a computationalist the moment he adopted the criterion of totally indistinguishable performance capacity, since robotic (i.e., sensorimotor) capacity cannot be just computational.

      Delete
  12. This paper does a great job of neatly summarizing the material taught within the course thus far, and I think the ideas covered in the paper can be captured by the line that says that “generating the capacity to do does not necessarily generate the capacity to feel”. This quotation neatly draws a distinction between the Easy Problem, here answerable by the Turing Test, and the Hard Problem, which is not so easily answered. I’ve often thought about the relationship between doing and feeling, and the idea that consciousness and feelings emerge from the collective functioning of the brain. Perhaps the capacity to do does generate the capacity to feel, but more as an emergent byproduct than anything else. Since we haven’t yet created a machine capable of passing the Turing Test, I wonder if such a machine would have the capacity to feel as a byproduct, and if feeling would be an emergent capacity, or if it would have to be a requisite capacity to pass the Turing Test to begin with. Food for thought, more than anything else.

    ReplyDelete
    Replies
    1. Stevan V, good points, but "emergent" (W-W) byproducts or "spandrels" did not fare too well with UG. Considering the scope of sentience (feeling), there must be more to it than "a mysterious side-effect of EP-capacity."

      Delete
  13. As everyone has said before, this paper pretty much ties together the concepts that we have learned in class. I think it was pretty interesting to read how Harnad explains Turing’s point of view, since before this, I would have assumed that Turing was a computationalist. Other than that, I think it was pretty refreshing to see all of the concepts we’ve seen in class as components in a bigger picture, bringing us back to the main focus of cognitive science, or the main question that we’ve yet to answer: how and why we feel what we feel.

    ReplyDelete
    Replies
    1. Hi Selin,

      I completely agree with you. The paper is a good summary of all the concepts we've covered in class. I also assumed Turing was a computationalist, so I re-read some of the readings about the Chinese Room Argument, which showed that a machine could pass the TT purely by computation without understanding, and that therefore cognition could not be only computation. I think the only person we learned about who believed that cognition was computation was Pylyshyn. I could not find any specific information on Turing’s computational beliefs, so I was quite surprised as well to discover that Turing believed there was something else too.

      Delete
    2. Selin, and what about the EP?

      Lili, and what about all the commentators who disagreed with Searle and proposed the "System Reply"? Computationalism was pretty much the prevailing view at the time, and for many still is (lately revitalized by the surprising success of ChatGPT...)

      Delete
    3. The systems reply proposed that in the CRA Searle is part of a larger system which includes his materials (pencil, paper) and Chinese algorithm and that the system as a whole understands Chinese. Computationalism is embedded in this argument because it is considering the algorithm and materials equally part of the Chinese computation as Searle himself. That is, if one were to refute computationalism, and argue that mental states occur in the human brain separate from the computations it carries out, it would be quite difficult to argue that paper or the rules themselves could undergo the mental state of understanding. In order to consider all of these aspects part of the same, computing and understanding entity, one would have to assume that understanding is embedded in computation.

      Delete
  14. It’s hard to find something to add to that very clear and synthetic summary of Professor Harnad’s perspective on cognitive science. But since this essay was written in 2012, I wonder if his position has evolved on certain topics in light of research progress. This doesn’t really seem to be the case, since this is almost exactly what he’s teaching us, except regarding the impossibility for a computational model to pass T2, given that some grounding would be necessary.

    The point that is still not very clear to me is what makes us think Turing is not a (strong) computationalist. Harnad posits that “He [Turing] was perfectly aware of the possibility that, in order to be able to pass the verbal TT (only symbols in and symbols out), the candidate system would have to be a sensorimotor robot.” However, Turing said in 'Computing Machinery And Intelligence' that digital computers are universal machines; therefore, it is unnecessary to design other machines to pass the TT. If a digital computer alone could pass the TT, it means that sensorimotor capacities are not required. Besides, I don’t think he really acknowledges the fact that what computers are doing is just simulation and not the real thing. In my opinion, Turing had the intuition that there was something missing in computation, but he probably didn’t have the time to investigate and formalize it.

    ReplyDelete
    Replies
    1. Joann, well, MinSets came along since 2012; so did Deep Learning, and ChatGPT. Yes, digital computers are universal machines, and of course Turing believed the CT-T, weak and strong.

      Now, please explain the difference between (1) an ice-cube, (2) a computer and (3) a computer simulation of an ice-cube. (It's not that Turing didn't have time to explain this: he saw it was obvious.)

      Delete
    2. I think it's crucial to differentiate between physical objects and their computational representations. An ice-cube is a tangible entity with specific physical properties, such as temperature and form. A computer, in contrast, is a device for processing data and executing computations. A computer simulation of an ice-cube represents these physical attributes digitally but lacks the ice-cube's actual physical characteristics. Regarding Turing's computationalism, while he recognized digital computers as universal machines capable of passing the Turing Test, this doesn't necessarily imply that he dismissed the importance of sensorimotor capabilities in cognition. Turing's focus was on information processing, and he likely understood the limitations of computational models in fully replicating human cognition, recognizing the gap between symbolic computation and the richness of human experience.

      Delete
  15. Similar to other students, I was interested in the discussion towards the end of the reading on Turing Testing and the limits of reverse engineering in explaining the hard problem. In my week 10b writing I asked whether, if in the process of making a T3/T4/T5 robot we added or removed something that made the robot feel or no longer feel, it would be possible to say that this thing is responsible for feeling. To this, professor Harnad said the following:
    "But even a T3 together with a divine certification that it feels would only explain how and why it can DO what it can do, and not the fact that it feels. If you removed or altered some widget that did not alter anything the candidate can DO, externally, and now you got a divine certification that it does NOT feel, that still would not explain how or why it feels. It would just show that TT could be passed without that T4/T5 widget."

    I found this to fit in line with the last two paragraphs of the 10c reading, in which professor Harnad states that regardless of whether the TT-passing model feels, the explanatory power of the TT remains only for performance. To be honest, I still find it hard to understand how removing or altering a widget that does not change the candidate's performance but changes its feelings would not show how or why it feels. I see how it shows that the TT could be passed without that T4/T5 widget, but I still don't follow how it would not provide any meaningful understanding of how that candidate can feel.

    ReplyDelete
    Replies
    1. Hi Omar, I think the purpose of the T4/T5 widget analogy is to show that the hard problem will not immediately be solved by identifying what part of the brain, if any exists, is correlated with feeling. It’s the same argument as with mirror neurons — we know that mirror neurons fire when mirror capacities activate, but this doesn’t tell us whether mirror neurons are causing the mirror capacity (as opposed to mere correlation), or how/why. With feeling, the problem is made even more intractable by the fact that feeling itself is unobservable to everyone besides the subject. Hence our reliance on “divine certification” just to be able to draw this analogy.

      Delete
    2. Adam, that's right.

      Omar, your puzzlement is understandable, but it is really only a puzzlement about HP if removing the F-widget (with divine intervention to overcome the OMP barrier) did not alter the T3 capacity. Does that explain how and why we feel, or does it make it even more mysterious?

      Without divine intervention, cogsci would conclude that EP and T3 or T4 or T5-capacity was all that Darwinian evolution needed, with or without the F-widget (and the F-widget would be interpreted as just one of the vegetative functions of T4/T5). If cogsci tested some of DD's heterophenomenological correlations, they'd notice that the activation of the F-widget was correlated with reports of feeling -- but would that explain how or why the F-widget causes feeling? (We all already know the brain does produce feeling: cogsci is just asking how and why. Reducing it to a brain widget does not explain it. See the "claustrum nostrum" link.)

      Delete
  16. This reading was a good summary of important points we've covered so far in this class. The essence of understanding involves a subjective experience that computers lack. They follow algorithms, providing outputs based on inputs, with responses categorized as either correct or incorrect. Machines are unable to be conscious of the nuanced feeling of comprehension or lack thereof. It feels like something to understand and it feels like something else to not understand: computers are not capable of doing that. A Turing machine operates on symbol manipulation and computation without the need to understand the meaning of the symbols; it focuses on what actions to perform with them.

    ReplyDelete
    Replies
    1. Rosalie, machines can feel, because we are machines that can feel. (What is a machine? See Weeks 0-2.) The question is only what kinds of machines can feel: computers can't, not even if they can pass T2. (Why not?)

      Delete
    2. The reason why computers that can pass T2 have no feelings is that machines that can pass the email version of the TT are only guaranteed to have verbal abilities indistinguishable from humans'. T2 machines are unable to interact with the objects, actions, and events in the world, and are therefore incapable of perception.

      Delete
  17. I was most interested in the connections between the strong CTT and the hard problem that were made explicit in this reading. We agree in this course that cognition cannot just be computation; Searle shows this, and a pure Turing machine isn't going out into the world to avoid eating poison mushrooms (hence the hard problem and the symbol grounding problem). But if we accept the strong CTT, then we should say that while pure computation can't actually cognize, it can model cognition. Where does that leave us with feelings? Can we computationally model feeling?

    ReplyDelete
    Replies
    1. Marie, no, partly because of the OMP (feeling is observable only to the feeler) and the EP (once cogsci solves the EP, there are no causal degrees of freedom left for explaining feeling). Can you explain to kid-sib why modelling feeling is not like modelling an icecube (or a T3, T4 or T5)?

      Delete
    2. We can model an ice cube, or T3-5, because we understand how the components of those things give rise to the object itself. The coldness of the ice cube, the molecules of H2O that compose it, the shape: all are things we understand and can use to explain why an ice cube is what it is and, critically, does what it does. Same with the Turing bots - we know how they are made, and what components of them we have designed to allow each type of functionality. But that doesn't tell us about emotion, or the HP - we know that the ways T3-5 does what it does are /a/ way of executing certain functions, but we 1. don't know if it's /the/ way that conscious beings do it and 2. don't know if these bots are /feeling/ how we feel when we do these things.

      Delete
    3. Madeleine, what are Strong and Weak Equivalence? Does the Turing Test ask for Strong or Weak Equivalence? (BTW, emotions are far from being the only internal states we feel. What are others?)

      Delete
  18. As others have noted, this was a great summary of the course thus far. As the content was clear and any questions I had were already asked by others, I thought it would be interesting to touch on what Descartes concluded from the cogito himself. The few times I’ve read Descartes’ Meditations, I wholeheartedly agreed with everything up to and including the cogito, and found the rest of it to be nonsense. Descartes established the cogito as a fundamental truth that cannot be refuted: He can doubt everything except the fact that he exists as a thinking thing, or as Prof Harnad put it, one can be certain that one is cognizing when one is cognizing. This was established as a fact, but so what? For Descartes, it “proved” mind-body dualism and the existence of God. This conclusion was, and remains, utter nonsense to me. But being in this class, I admit I now have a little more respect for Descartes. We do not understand how and why we feel any more than Descartes did in the 17th century, so I can see how slapping on the existence of God as a solution was such an attractive idea. To me, this is just further proof of the insolubility of HP, at least as we know it now.

    ReplyDelete
    Replies
    1. Paniz, my guess is that Descartes pretended to believe in God to avoid political persecution. But his mind/body dualism licensed a lot of cruelty (including Malebranche's). And the Cogito certainly did not put science on a solid foundation, as intended (the pineal gland? nonhuman animals are zombies?). (The best way to state the Cogito is "I feel, therefore there is feeling.")

      Delete
  19. Dr Harnad's inquiry into the nature of an ice cube, a computer, and a simulated ice cube presents an insightful thought experiment for me. An ice cube is an object composed of water in a frozen state. In contrast, a computer functions as a device capable of processing data and executing calculations. A computer simulation of an ice cube, while lacking physical form, can emulate the behavior of an actual ice cube within a virtual environment. A simulated ice cube can reflect the properties of a solid ice cube in terms of physics and appearance, but it does not have the sensory or emotional attributes that a conscious being would. This may be the difference between the easy problem and the hard problem: what we can do can be observed, while feeling cannot be directly observed and measured, though it exists objectively. Therefore, we can solve the easy problem through reverse engineering, but we cannot solve the hard problem.

    ReplyDelete
    Replies
    1. Jiajun, what is a "virtual environment"?

      Delete
    2. Hi Jiajun, my understanding of the ice cube story was slightly different. First, I think that whether the simulated ice cube is “lacking in physical form” is not a very relevant part of the argument. (Also, if “the simulated ice cube reflects the properties of a solid ice cube in terms of PHYSICS and appearance,” I don’t feel like it could be lacking in physical form in the first place?) Second, I also don’t fully agree with the statement that “A simulated ice cube… does not have the sensory or emotional attributes that a conscious being would.” This would be true even for a real ice cube, because no ice cube - real or simulated - is a conscious being, so it would not have any sensory or emotional attributes in the first place. If my understanding of the ice cube argument is correct, I think it’s just an illustration of the Strong Church-Turing hypothesis: the fact that you can perfectly simulate an ice cube - its coldness, the way it melts, how it looks - does not imply that a real ice cube is a computer simulation.

      Delete
  20. This article basically sums up what we have been learning over the course of the semester (TT, CRA, EP, HP). The new part, which I was not expecting, was the claim that Turing himself was not a computationalist (someone arguing that all cognition is computation). Harnad explains that Turing probably understood there was more to cognition, but finding an explanation of how and why we can do what we can do (EP) is the feasible part of the explanation using computation and the TT (which was only verbal when he came up with it, T2). Harnad says that Turing likely also understood that a TT-passer needs symbol grounding (aka sensorimotor capabilities, T3). What we don’t know is whether this passing T3 would have the capacity to feel like we do (OMP). Even if we get to a passing T3, this doesn't help us with the HP (how and why we feel).

    ReplyDelete
    Replies
    1. Kaitlin, and even with omniscient divine guarantees that T3, or T4, or T5 feels (or doesn't), the HP is not solved.

      Delete
  21. This paper does a good job in answering some of our midterm questions, and summarizes/integrates the topics we have learned in class so far in a way that is easy to understand even if the reader has no prerequisite knowledge about cognitive science. It starts with explaining what the study of cognitive science is and distinguishes the easy problem (explaining doing) vs the hard problem (explaining feeling). It then mentions the contributions of Alan Turing to this field, most notably the Turing test, and how the Chinese Room Argument by Searle relates to it, as well as Descartes’ “Cogito”. Harnad comments on how he believes that Turing is not truly a computationalist despite his arguments making it seem like he is. He admits the limitations of being unable to solve the Hard Problem and how explaining our doing capacity is already the best we can do.

    ReplyDelete
    Replies
    1. Andrae, nor does Turing believe that everything is a Turing Machine (just that everything is simulable by one: Strong CT-T).

      Delete
  22. As every other comment points out, this paper summarizes the big discussion points of this class. It goes over computationalism, Searle's argument against it, and the easy and hard problems. Once again, I find the example of simulating the real world to be the most interesting argument for why not everything can be computation. Yes, we can simulate a plane flying or an ice cube melting, but that does not do anything for us in the real world; we don't experience anything within a simulation. To boil everything down to computation would be to ignore the importance of sensorimotor capacities and our ability to feel. I found the closing passage most interesting, where Harnad mentions that Turing knew that reverse-engineering our capacity to do (which is what the Turing Test is designed for) would not at the same time reverse-engineer our capacity to feel. Although these two aspects of cognition seem so intertwined, we only seem able to solve half of the problem.

    ReplyDelete
    Replies
    1. Hi Ethan, I completely agree with your points. I would add that, in contrast to Dennett's heterophenomenology, which does not directly address or even acknowledge the HP, Turing's Strong Thesis falls short if taken to suggest that feeling can also be simulated (like all other things or processes) and hence explained.
      In relation to language, a word that describes a feeling can be grounded, but grounding still doesn't explain why we feel it, as Searle's argument (which you brought up) shows. One thing I now make better sense of is that the focus is not just 'how' (causal models or not) but also 'why': the function of feelings on top of the doing capacity that the EP studies.

      Delete
    2. Ethan, your summary is correct, but please confirm for me that you understand the difference between a computational simulation of an ice-cube and a VR simulation of an ice-cube.

      Kristie, I think Ethan noticed, but you didn't, that in his 1950 paper in Week 2, he made it clear that he did not think that either computer simulation or Turing Testing could capture or explain "consciousness" (feeling). Please find the relevant passages to confirm you have understood. (It was in the section where he used the misnomer "solipsism".)

      Delete
    3. Hi Professor. If I am not mistaken, a computational simulation of a plane or an ice cube has symbols and states that are interpretable (by users) as having every relevant feature of a plane or an ice cube, and it can even explain how these objects work. The computer program contains a model of a particular system (in this case, a plane or an ice cube); the computation can be executed and the output analyzed. But the computer program cannot fly the plane or be an ice cube melting in my palm.

      A VR simulation would take the output of the computer program and feed it into VR goggles and gloves. While I am wearing the equipment, it may fool my senses into feeling that ice is melting or that I am flying a plane, but the moment I take off the device, nothing has been accomplished in the real world. I haven't really flown a plane, nor has an ice cube melted and chilled my real hands. All that is left once I take off the gloves and goggles is the VR hardware and nothing more.
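
      To make the distinction concrete, here is a minimal, hypothetical Python sketch of a computational ice-cube model (the rate constant and all parameters are invented for illustration). The numbers are just symbols that we interpret as grams and degrees; running the program melts nothing and chills nothing:

        # A toy computational model of an ice cube melting. The values are
        # arbitrary symbols; only our interpretation links them to mass,
        # temperature, or melting. Nothing here is cold or wet.

        def simulate_melting(mass_g=10.0, air_temp_c=25.0, minutes=30, melt_rate=0.02):
            """Crudely approximate the mass lost per minute (made-up rate constant)."""
            history = []
            for t in range(minutes):
                if mass_g <= 0:
                    break
                mass_g = max(0.0, mass_g - melt_rate * air_temp_c)  # pure symbol manipulation
                history.append((t + 1, round(mass_g, 2)))
            return history

        for minute, remaining in simulate_melting():
            print(f"minute {minute}: {remaining} g of 'ice' remain")

      The printout is interpretable as an ice cube losing mass, and that interpretability is all a computational simulation provides; feeding the same numbers into goggles and gloves is what turns it into the VR case above.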

      Delete
    4. Ethan, your point about the distinction between computational simulation and VR simulation is spot on. It's why not everything can be reduced to computation, even if almost everything can be simulated by computation. While computational simulation can model and explain the workings of objects or processes (the Easy Problem's territory), it doesn't provide the actual feelings that we encounter in the real world. VR simulation, as you rightly pointed out, can momentarily trick our senses but ultimately leaves us with nothing tangible in the physical world. This insight aligns with Harnad's argument and emphasizes the significance of sensorimotor capacities and our ability to feel, which cannot be fully captured by pure computation.

      Delete
  23. As Professor Harnad suggested in many of his answers to the comments on this article, we can now make a connection between language and feeling. It won't necessarily help us solve the hard problem, but it can still give us another distinction between computation and the brain. As we think or speak, we feel something. As mentioned in the lecture about language, even if we are unable to explain why, UG gives us a path to what can and cannot be said in a language we understand (OG does as well, but since OG is learned I leave it aside, whereas UG is innate): a sentence feels right or wrong to us, whereas computation can explain its results exactly, based on the symbol manipulations leading to the output.

    ReplyDelete
  24. Beyond this discussion of the easy and hard problems, it seems that even if we were to solve the hard problem, we would lack the words to describe the solution. Maybe that is just a limit of the English vocabulary, just as some numbers can't be expressed in certain languages, but how could we communicate precisely about whatever leads to feelings? How could we describe to a zombie what it feels like to feel? Take vision, for example: we have the physiological explanation, we can report how we feel, and we can describe the brain circuitry, but even with the causal explanation in hand, I can't conceive of any other way of communicating what is happening than putting the interlocutor under the same sensory stimulation and commonly agreeing that receiving a punch hurts, and that seeing something blue feels like seeing something blue.

    ReplyDelete
    Replies
    1. Good questions. How would you solve it without having the vocabulary, though? Or rather, how would you know you had solved it? I am not sure you can solve the hard problem without language: it is about explaining how and why we feel, and that seems to imply you must be able to express the explanation. What is an explanation if it is not intelligible to ourselves or others? In that sense, I could say I have solved the hard problem but I just can't tell you how... and where would that leave us?

      Delete
    2. I think your second point about vision is a great example of the OMP. Indeed, it shows that there is no way of knowing what someone else feels or experiences other than actually being them. Perhaps mirror neurons and empathy are an approximation.

      Delete
  25. Humans have feelings all over the place, which means the EP is related to the HP in every aspect; worse, the OMP is blocking current approaches to the HP, making the HP look unsolvable. On the other hand, it seems that causal relations could hold between DOING and the EP at every point, which leads to the conclusion that simply solving the EP leaves no room for the HP, and then to the feeling that the HP should not have been posed in the first place. Arbitrary, even contradictory, ideas arise in the process.
    Personally, I think the confusion arises from ambiguity about the WHAT question. On the EP side, we can clarify WHAT the EP is to a certain degree; however, we do not understand WHAT feeling is in the first place. The EP can be tackled because we settle WHAT first and deal with the HOW and WHY questions later, and the same logic ought to apply to the HP. Yet even before we understand WHAT feeling is, we skip the core and ask for answers to HOW and WHY, which creates a huge gap and leaves too much room for speculation and guessing.
    And in order to understand WHAT feeling is, maybe we need to address the OMP first? Or could there be other approaches that bypass the OMP and lead to empirical experiments on feeling?

    ReplyDelete
  26. I found it interesting that it was pointed out that neither a computer simulation of an ice-cube nor a VR simulation of one is the same as the real thing. A computer simulation can help test and design ice-cube prototypes computationally, without having to physically build and test them, but it is not the real thing. Similarly, a VR simulation of an ice-cube can generate an experience that the human senses cannot distinguish from the real thing, but it is still not the real thing. This is because the experience of feeling the coldness of the ice-cube is not just a matter of computation or simulation; it also involves a subjective feeling that cannot be fully captured by a computational or VR simulation.

    ReplyDelete
  27. This paper was a great review of the main topics of the course so far. I was most interested in the final section, when Professor Harnad references “computations cognizing” and Descartes. In our recent discussions of the hard problem – explaining how and why we feel – I was a little confused as to what we meant by feeling, whether it was just emotions, or the things we notice about ourselves via introspection. This paper, as well as our discussion in class on Friday about felt states, immensely helped my understanding. As Descartes’ Cogito informs us, the feeling we’re interested in is what it “feels” like when we cognize; it’s difficult to explain, but we know when we are.

    ReplyDelete
  28. As others have mentioned above, this last reading was a good summary of the concepts we have learned up to now. One of the fascinating parts for me was Professor Harnad's discussion of “computations cognizing”. He mentions how Searle knows that he is not understanding Chinese even while passing the Chinese TT. When someone understands something, there is a feeling of understanding, and whether that feeling is occurring is known only to the cognizer himself, in this case Searle. Professor Harnad then links this to Descartes' "Cogito, ergo sum" ("I think, therefore I am"), which establishes one's existence for oneself as a thinking being ("I can't doubt that I'm cognizing when I'm cognizing"). Reading this showed once again that although the Turing Test addresses doing and doing-capacity, it does not answer the hard problem of how and why we feel. I wonder if this is a question that can be answered in the near future.

    ReplyDelete
  29. This reading was a great summary of all that we have learned in the course. I wish we had studied more of these concepts earlier on, in order to get a better grasp of what the goals of cognitive science are. The one concept from this reading that I am still a bit confused about is the Cogito. From my understanding, it's the idea that we can doubt everything, since we cannot be sure anything exists, but we can be sure that the doubting thought itself exists... Can anyone clarify this?

    ReplyDelete
    Replies
    1. "I think, therefore I am." This statement anchors our existence in the certainty of our thoughts. Even if we doubt everything around us, the very act of doubting or thinking is proof of our existence. Because while everything else could be an illusion, deception, or dream, the thinking itself is real and undeniable. It's the foundational truth that we exist as thinking beings. This is the starting point of Descartes' philosophy: our consciousness, the undeniable fact that we are thinking entities. Whether or not the external world is as we perceive it, the experience of thinking is indisputable evidence of our being. Hope that helps.

      Delete
    2. It is important to note that "cogito ergo sum" in many readings doesn't necessarily imply an "I", only an "I-am" or "I-am-thinking" (hyphens included). I like the alteration Professor Harnad offered in class, "sentio ergo sentitur": I-feel, therefore feeling-is-being-felt (third-person singular present passive indicative).
      It's hard to illustrate in English, as the structures of the two languages are different, but the "I" in "I-am" is fused with the verb.
      In Portuguese (more or less modern Latin) it's "Penso então sou", not "Penso então eu sou"; the addition of "eu" (I) slightly alters the meaning, positing an I.

      Delete
  30. As mentioned already, this piece was a great summary of the main topics of the course thus far. One topic that has been on my mind all semester is the symbol grounding problem; I think it's a really fascinating idea regarding consciousness and the human predicament. The following question came to mind after the reading: how does the symbol grounding problem challenge our approach to AI development? More importantly, what could it reveal about the limitations of artificial systems in understanding human semantics? Even if an AI could pass the Turing Test, it would remain unclear whether it truly grasps the meaning behind words and concepts without direct experiential grounding. Take GPT, for example: it spews out information without being able to truly grasp or understand it. Given this inability, how much better do you think ChatGPT would be if its symbols were grounded the way ours are?

    ReplyDelete

  31. As this article presents, the Turing Test has not gotten us any closer to solving the hard problem of how and why we feel. However, progress on the easy problem has already led to many advancements, such as ChatGPT and applications of AI in health. Although such applications have no grounding and cannot attribute meaning, they are extremely useful tools for sentient humans. But would solving the hard problem through reverse-engineering be useful or beneficial to human society? There is certainly value for human intellectual pursuit and curiosity in the hard problem, but what would happen if we did solve it and applied it to AI, and what would that mean for humans? If AI and LLMs like ChatGPT could indeed attribute meaning and feel the way humans do, what role and purpose would be left for humans in society? If humans no longer had the burden of making the hard ethical decisions and feeling the feelings that we do, what meaning and motivations would we have?

    ReplyDelete
  32. Much has already been said, and I very much appreciated reading this article after spending the past weeks learning what it says: it feels very familiar and easy to read. That being said (trivialness alert!), if it is as we say and Turing was a "computationalist" only in the general sense (the physical CT-T), then cognition would be (far down the line) simulable by computation. If everything in the universe is simulable by Turing machines, that does not leave cognition out (unless one says that cognition is not in our universe, but what are we even talking about at that point?). That is to say, cognition and all of its necessary parts (the sensorimotor grounding aspects included) fall within the physical, which the Strong CT-T says can be simulated by computation. The reason I say it's trivial is that it feels like a pedantic point, but I just wanted to make it clear for myself.

    ReplyDelete
  33. This reading made me realize that I had not fully clicked with a fundamental part of the course. The Strong CTT (the "physical CT", as it is referred to in the reading) had always confused me because it did not seem consistent with what we had concluded in the course: if computation can simulate thought, doesn't that mean that Turing was a computationalist? Doesn't this put him at odds with Searle's CRA? I had not fully registered the fact that by simulating he meant just that: simulation. This is why Searle can pass the TT in Chinese: using computation, he is simulating cognition without cognizing. I know this sounds silly, especially since we covered it so long ago, but this was really an Aha! moment for me.

    ReplyDelete
    Replies
    1. Hi, I actually had the same realization. It was a bit confusing for me throughout the course that Turing was not a computationalist about cognition but was a "computationalist" in the general sense of the physical version of the CT-T. It made me realize, just as you did, that simulation is just simulation and nothing more. Simulation is not reality. A computational simulation is merely the execution of a computation containing the symbols and rules needed to "simulate" its target system. A VR simulation, on the other hand, is just the execution of a computer program together with gear that can fool the senses of the user; once the gear is taken off, there is no sign that the simulation ever took place in any part of reality.

      Delete
  34. Turing argues that the closest we can get to explaining cognition is to explain a creature's capacity to do. The TT is not meant to explain cognition completely so much as to demonstrate the limits of how much of the matter we can settle. We can use the TT to determine whether an entity has the doing-capacities we ascribe to cognizing beings, but we cannot use it to determine anything further than that. Part of the power of the CT-T for guiding our understanding of cognition, and of the intersection of cognition and computation, is that it delineates the limits of what we can capture by computation alone. The CT-T does so by highlighting what a TM can do (simulate just about any process found in the real world). This, in turn, clarifies what a TM cannot do (be the tangible, real-world phenomena it simulates, as with the ice cube melting).

    ReplyDelete
  35. In this text, Harnad provides a great, comprehensive overview of what we have been learning throughout the course. Even though I am a fourth-year cognitive science student, it wasn't until this class that I truly understood how impactful and revolutionary the TT was for the field of cognitive science. I had heard of it in a few classes, but its role in how cognitive science came to be was never explained, so I enjoyed seeing that integrated further in this text! Harnad also explains that he does not believe Turing was a computationalist when it came to cognition, only in the general (physical CT-T) sense. This distinction was helpful because I hadn't initially considered that possibility!

    ReplyDelete
  36. This was a clear and concise summary of many of the key concepts covered in this course, so clear in fact that I sent it to my Dad and to my roommate!

    What stood out to me was Harnad's explanation of the physical version of the Church-Turing Thesis. It is particularly interesting to note that if the physical version of the CT-T is true, we could theoretically simulate all physical phenomena computationally to any desired accuracy; however, we have to keep in mind that the perfect computational simulation of a thing is not the thing itself (e.g., a perfect simulation of a plane flying London-NYC is not a plane actually flying London-NYC).

    This raised some interesting philosophical questions for me. First is Leibniz's identity of indiscernibles as applied to the Turing Test and the physical CT-T. If there is no distinguishable feature of a simulation of a thing versus the real thing, is it ever possible to say that the computational model of, say, a plane and a real plane are actually identical? If I put on a full-body VR suit and take a VR flight London-NYC, and the flight is indistinguishable from actually flying London-NYC, then by Leibniz's identity of indiscernibles, can we say that the two experiences are actually the same? Second, I want to introduce some Plato. For Plato, the representation or simulation of a thing (say, of a flight) is less real than the thing itself, since it is ontologically dependent on the thing itself. For example, a simulation of a car is less real than a car, because the accuracy of the simulation is entirely dependent on something outside itself: the real car. However, if the simulation of the thing is entirely indiscernible from the thing itself (in a sort of lifelong-TT manner), has the simulation freed itself of its ontological dependence on the thing it simulates and taken on a life of its own?

    ReplyDelete
    Replies
    1. Hi Daniel, I found your philosophical connections to the CT-T interesting, and they prompted some questions for me. The CT-T says that any physical, dynamical structure can be approximated in a simulation, but only approximated, I think. Would the two planes differ in that the VR flight's physical features originate from the VR suit (there are no actual seats you sit on, no door you enter by, no emergency exits you can escape from, without the suit, that is)? Your sensorimotor experience in the VR flight and in the flight actually going to London would differ because, for instance, you cannot physically cover any ground when walking in the VR plane. Would there not always be a distinguishable feature, in that it requires you to put on VR goggles in the first place? As for your second point: if someone made a physical, identical copy of car A, and also an identical copy of car A in a simulation, the physical copy and the simulated copy would differ because one is digitized and one has physical components. Would the simulated car have freed itself from ontological dependence because it is a separate object (made up of different materials, though I do not think that even matters), even if it depended on the thing it simulated? And on what grounds would it be less real (what is "real")?

      Delete
  37. To observe the causal role of specific brain areas, neuroscientists often remove or deactivate a particular part of the brain. For instance, if a certain brain area, like the hippocampus, known for its role in memory formation, is deactivated or damaged, and we then observe a significant impairment in the ability to form new memories, this provides clear evidence of the hippocampus' crucial function in memory processes. This week exposed us to the real challenge: even if we somehow managed to remove or deactivate a feeling whose function we do not know, we still would not know the causal role of that feeling (how and why it feels).

    ReplyDelete
    Replies
    1. Hi Miriam,
      That in itself is the essence of the distinction between the EP and the HP, where Prof. Harnad states, "Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel". Without a doubt, we can reverse-engineer physical mechanisms to replicate them and create a T3 robot; I would argue that this is the easy part of the process, hence why the EP is, well, easy! I think we start to face difficulties in creating T5 robots because, for them to be totally indistinguishable from a human, we would need to know how and why we feel; we would need to solve the HP in order to replicate it, and we still aren't even close!

      Delete
  38. This reading is a summary of the catcomcon course!! Turing's framing of the EP and computationalism, the implications of Searle's argument and its connection to the OMP, as well as a clear distinction between the EP and the HP when Prof. Harnad states (what I believe is probably one of the best quotes in this paper): "Turing was perfectly aware that generating the capacity to do does not necessarily generate the capacity to feel". This helps me understand that the TT was never actually "flawed" as I had believed. I think I had overgeneralized what the aims of the TT were: I thought that if we reverse-engineered how and why we do, it would help us understand how and why we feel, but it is more complicated than that! I'm really glad Turing addressed this: just because the EP is potentially solvable does not guarantee that we can solve the HP!

    ReplyDelete
  39. The idea that passing the TT requires a sensorimotor-grounded robot (T3) raises the question of whether behaving the same way a human would requires feeling at all. If we assume that T3 is enough to pass the TT, or even that a T4 robot is required, then, if the robot behaves indistinguishably from humans for a lifetime in the way it thinks and acts, couldn't that presuppose that those thoughts are felt? If they weren't felt, would the behavior be totally indistinguishable from behavior that was? Turing would argue that the capacity to do doesn't imply the capacity to feel. But it is a question that I ask myself.

    ReplyDelete
    Replies
    1. Hi Mitia, I believe the TT was designed to address the EP, not the HP. I don't believe the capacity to do implies the capacity to feel since, in theory, the TT can be passed by computation alone, which does not require feeling (as Searle's Chinese Room Argument shows to a degree). However, I do believe this in itself gives some (limited) insight into feeling and the HP.

      Delete
  40. Prof. Harnad's paper provides a concise and useful overview of the topics covered in class thus far. In recent classes, Harnad has repeatedly asserted that the HP is likely insoluble. This paper, like much of the other material we've covered, inclines me to think the same. Claiming to know why it is insoluble is likely just as baseless as claiming that it is soluble, but my intuition tells me it stems from the property of feeling that clearly and consistently eludes definition. Even as the feelers of our own feelings, if we focus on what is doing the feeling when we are feeling, we reach a dead end very quickly. This isn't to say that, because we cannot introspect the causality of our feeling, we can never explain it through more promising approaches; it merely highlights the innate elusiveness of the phenomenon. Moreover, the topic seems only to introduce more questions whenever a potential answer seems remotely imminent, often giving me a feeling of infinite regress.

    ReplyDelete
  41. I liked that this paper circled us back to the beginning and followed a clear path from where we started to now. It begins with Turing machines and computationalism, and with how the Turing Test is not a game or an act but an attempt to address the easy problem. Turing machines operate on arbitrary symbols according to a given set of rules (see the sketch below). For those symbols to be understood, they need to be grounded through sensorimotor capacities. Searle's Chinese Room is an example of a successful input/output operation in which the system nevertheless lacks any understanding of the symbols: Searle is not cognizing in Chinese. This paper gave a nice, simple synthesis of many of the foundational elements of the course.
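
    As a kid-sib illustration of that rule-driven symbol manipulation, here is a minimal, hypothetical Python sketch of a Turing machine (my own toy example, not Turing's notation): a tape, a read/write head, and a rule table, shuffling symbols whose shapes mean nothing to the machine:

      # A minimal Turing machine simulator. The machine below just flips each
      # bit and halts on a blank: rule-driven squiggle-shuffling, with no
      # meaning beyond our interpretation of the 0s and 1s.

      from collections import defaultdict

      def run_tm(tape_str, rules, state="scan", blank="_", max_steps=1000):
          tape = defaultdict(lambda: blank, enumerate(tape_str))
          head = 0
          for _ in range(max_steps):
              if state == "halt":
                  break
              write, move, state = rules[(state, tape[head])]  # (state, read) -> action
              tape[head] = write
              head += 1 if move == "R" else -1
          return "".join(tape[i] for i in range(min(tape), max(tape) + 1)).strip(blank)

      # (state, read) -> (write, move, next_state)
      flip_rules = {
          ("scan", "0"): ("1", "R", "scan"),
          ("scan", "1"): ("0", "R", "scan"),
          ("scan", "_"): ("_", "R", "halt"),
      }

      print(run_tm("10110", flip_rules))  # prints 01001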

    ReplyDelete
  42. The idea that a simulation of a thing is not the thing is laid out really clearly here in the plane example; one might say it is plainly exPLAINED. Anyway, I think the most popular exemplification of this is "Ceci n'est pas une pipe". We have known since 1929 that the representation of a thing is not that thing, only a depiction of it. This somehow becomes more confusing when we add bells and whistles, when we make it feel as if we are smoking une pipe. Perhaps because our experience is what matters most to us, that experience can lead us to mistake a virtual reality for reality.

    ReplyDelete
  43. This article was a very succinct summary of the course. Seeing Descartes' "Cogito" mentioned in the past three readings, I've been thinking back to the reformulation of the Cogito that we saw in class. "Cogito ergo sum" really isn't as tautological as it is often taken to be. From my understanding (is this correct?): I really can't be sure that "I am": what is this "I", and what does it mean for "I" to "be"? I can think, yes, but any content of my thoughts that is grounded in the physical world, or in the sensorimotor apparatus by which I perceive it, is not made certain just because I think. This is because, as we know, we cannot be certain of the physical world itself, and our sensorimotor apparatus, our body, is also just a part of the physical world. So we really can only be certain that we feel: "sentio ergo sentitur".

    ReplyDelete
  44. The concept that lingers most in my mind is the question: what is the benefit of having feelings? It seems we could survive fine without them. But since they have not been discarded by evolution, there must be some use to them. I guess this is why feeling is the hard problem: whereas doing capacity can be explained by its functions and outcomes, feeling capacity is mysterious. Turing proposes total indistinguishability of the candidate from humans, which arguably cannot be achieved using computation alone, because humans do more than just computation. Reverse-engineering with feelings involved is much harder, however, as we know so little about the function, origin, and observational data of feelings. If one day a candidate is successfully built, indistinguishable from a human, with a perfect algorithm simulating feeling-reactions, will that be satisfying?

    ReplyDelete
  45. Our brains are networks of neurons firing off signals, and thinking somehow arises from that activity. We cannot put thinking under a microscope, but we have indirect ways of peeking at it: recording brain activity, running computational models, and observing what happens when we put our thoughts into action.

    Continued progress across the many disciplines studying the brain, mind, and intelligent systems will help elucidate core processes like representation, learning, and decision-making that constitute our ability to think. Understanding thinking has implications beyond scientific curiosity: for building more effective AI, treating cognitive disorders, and maximizing human potential. So it remains an important challenge.

    ReplyDelete
  46. As everyone has mentioned, this reading provided a great and concise summary of the course, starting with Turing's invaluable contributions to the early roots of cognitive science and its aim: explaining the causal mechanisms of our DOING (EP) and FEELING (HP) capacities (i.e., cognition). To reverse-engineer these capacities, a candidate that is indistinguishable from a human (in the things it can DO) must be engineered. If this is accomplished, we have a clue about the causal mechanisms that give rise to how and why we can do the things we do. As a counter-argument, however, Searle proposed the CRA, in which he showed that cognition cannot be just computation (symbol manipulation), since he shows no understanding of Chinese even while successfully manipulating its symbols (words) based solely on rules. Consequently, the Symbol Grounding Problem emerges: how do symbols acquire their meanings? For a symbol to have meaning, sensorimotor experience is required, so robotic capacity is at the forefront when we think about passing a TT (i.e., explaining cognition). But this concerns the Easy Problem, since it aims to explain doing capacity; the Hard Problem remains a potentially "insoluble" aspect of cognition.

    ReplyDelete
  47. The paper gave an effective summary of all the key concepts brought forward in the course, tying them together: the profound impact of Turing and his TT, Searle's CRA and why computation ≠ cognition, Descartes' Cogito, and the Easy and Hard Problems. My main takeaway from the paper is how hard, if at all possible, it is to solve the hard problem. The hard problem, primarily concerned with feeling, essentially boils down to figuring out what it means to be human: an insurmountable task, in my opinion.

    ReplyDelete
  48. This reading made something clear for me. The connection between the CRA and the SGP elucidates a fundamental challenge in cogsci and AI: how to imbue computational systems with genuine understanding (and maybe one day the capacity to feel) rather than just the capacity to perform tasks and manipulate symbols. Can sensorimotor dynamics and neural nets alone get us there, or is something else needed?

    ReplyDelete
  49. Understanding Chinese, as opposed to merely computing over its symbols without understanding, must correspond to different processes in the brain, and the way those processes handle the information could bear on the easy problem. The hard problem is very frustrating. I don't want to think it is unsolvable, but it might very well be just that. This reading summarizes most of the topics discussed during the course; a great refresher.

    ReplyDelete
  50. [The first paragraph is a brief overview of Turing's work; it can be skipped for brevity.] Harnad summarizes the parts of Turing's work that are particularly relevant to cognitive science, including the Turing Test, in which a machine attempts to act in a way indistinguishable from a human, and the physical version of the Church-Turing Thesis, which hypothesizes that any physical system can be simulated arbitrarily closely by a Turing machine, and therefore by any modern computer, given unbounded time and memory.
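
    To illustrate the "arbitrarily closely" clause, here is a toy example of my own (not from the paper): discretizing Newton's law of cooling with ever smaller time steps brings the simulated temperature as close as we like to the exact physical trajectory, yet the numbers themselves never get cold.

      import math

      # Newton's cooling: dT/dt = -k * (T - T_env), whose exact solution is
      # T(t) = T_env + (T0 - T_env) * exp(-k * t). Euler steps approximate it,
      # and the error shrinks as the step size does: the sense in which a
      # simulation can approximate a physical process arbitrarily closely.

      k, T0, T_env, t_end = 0.3, -5.0, 25.0, 10.0
      exact = T_env + (T0 - T_env) * math.exp(-k * t_end)

      for n_steps in (10, 100, 1000, 10000):
          dt, T = t_end / n_steps, T0
          for _ in range(n_steps):
              T += dt * (-k) * (T - T_env)  # one Euler update
          print(f"{n_steps:6d} steps: T = {T:.6f}, error = {abs(T - exact):.6f}")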

    One point I am divided about is the discussion of the difference between a plane and the simulation of a plane. It does not seem to elucidate much: computational functionalists, like Dennett and (I believe) myself, think that a simulated mind would feel, whereas Harnad, I take it, thinks it could not. It is clear that a simulated ice cube only melts insofar as it is interpreted within the simulation as melting; no physical ice cube exists in the conventional sense of the term. In the same way, no brain exists physically when such a brain is digitally simulated by a computer. The feelings, on the other hand, may well be occurring in the same way they do for a flesh-and-bone human. I don't pretend to be able to explain why, but it seems that supposing this could not happen would break a lot of my current models of reality. The disagreement appears to run deeper than this discussion implies.

    ReplyDelete
  51. As everyone has said in the previous comments, this reading is very helpful because it summarizes all the key concepts of the course and integrates them in a kid-sib way. It is interesting to see how so many of these terms boil down to feeling and how therefore the hard problem, explaining how and why we feel, is the biggest challenge of cognitive science.

    ReplyDelete
  52. This article summarizes the topics covered so far in this course. It introduces the Turing Test and its importance to cognitive science, explains how Searle's CRA demonstrated that T2 and computationalism are not enough, since understanding (a felt state) is missing, and finally shows that the ultimate question of cognitive science is the Hard Problem of how and why we can feel whatever we feel. I have a question regarding the design of the Turing Test. In describing it, Professor Harnad says that the candidate should be able to communicate about anything that we would communicate about. But there are infinitely many possible things to communicate, including things that no humans have yet talked about. So how can we design the content of the communication for the Turing Test? A machine might pass with some conversations but fail at really complex and deep ones. I wonder how we can know for sure that a candidate can pass at all possible levels of conversation.

    ReplyDelete
