Extra optional readings:
Harnad, S. (2011) Minds, Brains and Turing. Consciousness Online 3.
Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog, June 13, 2014.
Dennett, D. (unpublished) The fantasy of first-person science.
"I find it ironic that while Chalmers has made something of a mission of trying to convince scientists that they must abandon 3rd-person science for 1st-person science, when asked to recommend some avenues to explore, he falls back on the very work that I showcased in my account of how to study human consciousness empirically from the 3rd-person point of view. Moreover, it is telling that none of the work on consciousness that he has mentioned favorably addresses his so-called Hard Problem in any fashion; it is all concerned, quite appropriately, with what he insists on calling the easy problems. First-person science of consciousness is a discipline with no methods, no data, no results, no future, no promise. It will remain a fantasy."
Dan Dennett's Video (note: use Safari or Firefox to view; it does not work in Chrome).
Week 10 overview:
see also How/Why The Hard Problem is Hard: http://andara.uqam.ca/Panopto/Content/Sessions/e77674a1-a902-40e0-a9ac-7a96185c4399/78025b8e-68cf-4256-b663-3ac51c8ed100-f13e20f4-93f1-4af9-9ba0-8d2c221be233.mp4
and also this (from week 10 of the very first year this course was given, 2011):
Reminder: The Turing Test Hierarchy of Reverse Engineering Candidates
t1: a candidate that can do something a human can do
T2: a reverse-engineered candidate that can do anything a human can do verbally, indistinguishably from a human, to a human, for a lifetime
T3: a reverse-engineered candidate that can do anything a human can do verbally as well as robotically, in the external world, indistinguishably from a human, to a human, for a lifetime
T4: a reverse-engineered candidate that can do anything a human can do verbally as well as robotically, in the external world, and also internally (i.e., neurologically), indistinguishably from a human, to a human, for a lifetime
T5: a real human
(The distinction between T4 and T5 is fuzzy because the boundary between synthetic and biological neural function is fuzzy.)
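For anyone who finds it easier to read the hierarchy as a data structure, here is a minimal Python sketch (the level descriptions are paraphrased from the list above; the class and variable names are purely illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TTLevel:
    name: str
    scope: str  # what the candidate must be indistinguishable in, for a lifetime

# Paraphrased from the hierarchy above; t1 is a "toy" fragment, not a full TT level.
TT_HIERARCHY = [
    TTLevel("t1", "some fragment of what a human can do (a toy capacity)"),
    TTLevel("T2", "anything a human can do verbally"),
    TTLevel("T3", "anything a human can do verbally and robotically (sensorimotor), in the external world"),
    TTLevel("T4", "verbal, robotic, and internal (neurological) function"),
    TTLevel("T5", "everything: a real human"),
]

for level in TT_HIERARCHY:
    print(f"{level.name}: indistinguishable in {level.scope}")
```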
Dennett mentions Chalmers' zombie twin. Apparently "zombies have internal states with contents, which the zombie can report (sincerely, one presumes, believing them to be the truth); these internal states have contents, but not conscious contents, only pseudo-conscious contents." I can't grasp what pseudo-conscious means. How could something that is an internal state, that is content-full, and that is believed to be true be any different from the same thing in a conscious being? I feel like Chalmers is just trying to pinpoint any difference between unconscious beings and conscious beings, and here it is: this pseudo-aspect. Apparently it is a question of evidence: but how does one have evidence of one's consciousness -- evidence that is different from just believing the thoughts in your head, as Zombie Chalmers does?
Emma, please combine your commentaries rather than give several in a row. See reply below.
This question of how you could prove there is a Zombie sounds exactly like the hard problem. It can do everything we can, think everything we can, but can it feel everything we can? How and why would we feel something while the Zombie wouldn't?
Emma,
1. You were right to substitute feeling for the weasel-word "consciousness". It makes it much more obvious that "pseudo-feeling" means nothing. A feeling is either felt or it is not felt. Even if it's a faint feeling.
2. "Zombies" are metaphysical nonsense: T5-indistinguishable from us, but not feeling a thing.
3. What does it mean to have "internal states" with "contents" that you cannot feel? If it's a state you cannot feel, it's a state you cannot feel. A pot of water can be in the "internal" state of boiling, but it does not feel that it is boiling. A robot, likewise, can be in a state of low battery charge, detect it, and say, like Siri, "My battery is at 20%", but it does not feel that its battery is low. It doesn't feel a thing. (See the sketch after this list.)
4. "Believe" is a weasel-word. When you are believing that it's Saturday, that FEELS like something. But if you are feeling nothing, you are not believing anything. A belief is a felt state, just as thinking and wanting are felt states.
5. Unfelt states in feeling organisms are just that: unfelt states. You may have a fever but not feel it.
6. An entity that has no felt states at all, ever, is a zombie, no matter what else it can do.
7. The Hard Problem (HP) is explaining how and why feeling organisms (such as us) feel.
8. Zombies are things, like boiling pots of water, that do not feel. Siri is a zombie; ChatGPT is a zombie; and a T3 or T4 robot would be a zombie too, if it did not feel.
9. But because of the OMP, we can't know whether a boiling pot of water or Siri or ChatGPT or a T3 or a T4 or even another human being, feels. The OMP is not the HP.
10. Now: why and how is the EP easy and the HP hard?
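To make point 3 concrete, here is a minimal sketch (in Python; the Robot class and the 20% threshold are illustrative, not from the reading) of a device that is in an internal state, detects that state, and reports it, without feeling anything:

```python
class Robot:
    """A device with an 'internal state with content' that it can detect and report."""

    def __init__(self, battery_level: float):
        self.battery_level = battery_level  # the internal state

    def battery_is_low(self) -> bool:
        # Detection: a measurement compared to a threshold. Nothing here is felt.
        return self.battery_level < 0.20

    def report(self) -> str:
        # Reporting: a verbal output about the internal state. Still nothing felt.
        if self.battery_is_low():
            return f"My battery is at {self.battery_level:.0%}"
        return "Battery OK"

siri_like = Robot(battery_level=0.19)
print(siri_like.report())  # "My battery is at 19%" -- detected and reported, not felt
```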
Hey Tina, I FEEL the same about your concerns, but this feeling is my interpretation of yours, based on my own feeling, which is not empirical because of the OMP. The HP is hard because there is no way to test it empirically.
Internally, I cannot test my own feelings based on my own feelings: I would be trapped in the introspection problem. That is subjective, and I would just be begging the question.
Externally, I cannot test my feelings by comparing my feelings to others', because of the OMP. Therefore, there is no place to start in addressing the HP, because we do not have any method to fill the gap. My hope is that by solving the EP, some empirical approaches might appear, and maybe more hints will be discovered that will be helpful in dealing with the HP.
This reminds me of the 4b reading by Fodor, in acknowledging the limitations of physical phenomena. Whereas Fodor was discussing it in the context of the easy problem, this thread critiques Dennett's paper in the context of the hard problem. I think Professor Harnad gets at the hard problem from a functional point of view, specifically in questioning what we can DO that we could not DO without the associated feelings. I wonder whether, if feelings did in fact have a function, a T3 zombie could technically even exist: would it really be doing everything we can do without the capacity to feel?
Jocelyn, if you know how and why T3 (or T4) could not do "everything we can do without the capacity to feel", you have solved the HP. Otherwise T3 (or T4) is the closest you can get (because of the OMP).
Hi Professor, I read through your comments on this thread and was slightly confused about how we know that human beings can do what they do without feelings, or that feelings are simply correlated with our doings rather than causally related to them. Searle's Chinese Room argument demonstrated that speaking Chinese, or any language for that matter, could be done through a series of computations according to dictated rules, completely devoid of understanding. However, while the Searle experiment works in a hypothetical capacity, it seems doubtful that a person could really learn every rule of an entire language and its infinite combinations of words to the degree that they could speak it completely indistinguishably from a person who actually understands the language. What if someone says a combination of words that has never been said before? Even if someone were somehow able to memorize every single word in a language, all of the common combinations of these words, and the prompted ways to respond (which is doubtful, because that is a lot of information), there would inevitably still arise combinations which they were unable to account for. I believe that our capacity to understand and feel effectively makes this process a much simpler one. Thus, couldn't it be said that feelings allow human beings to "do" language faster and more efficiently? And couldn't the same then be said for other important human functions, like learning and memory and collaboration? It's hard for me to imagine that having feelings is not fundamentally beneficial to our performance of these tasks.
You present a good point, Zoe, that Searle's experiment seems more feasible in a hypothetical capacity. However, my understanding is that because of the OMP we cannot assume other people have feelings. It is theoretically possible for a zombie to pass the Chinese Room experiment with enough learning and time. In theory, we have no evidence that zombies don't exist or that they wouldn't be able to pass the TT. If we manage to answer the HP and understand whether feeling is necessary to pass the TT, that would tell us for sure that there are no zombies and that the Chinese Room experiment could not be passed by one.
Hello Professor, after reading your comments on this thread and your questions, "What is the causal role of those correlated feelings?" and "What can we DO that we could not DO without them?", I thought about them for a while and came to an answer (I am not completely sure if it is correct). Regarding the first question, from my understanding, the causal role of these correlated feelings (e.g., smiles) is that they are indirect evidence of the fact that we are in a certain state / feeling something. For example, there is no direct test to quantify or measure things like 'happiness'. So these indicators of happiness, like smiling, laughter, etc., are what show that we are feeling happy. Therefore, without them we would not have indirect evidence of things like 'happiness'.
But then, while I was thinking of this answer, what got me confused was that such indicators can be faked, even by a T3 or T4 robot. Siri, for example (which is also considered a zombie), could produce fake indicators to make it seem as if it feels 'happy'. Considering this, do correlated feelings really serve a purpose (or are they significant) as proof of our feelings?
I'm having a bit of a hard time following Dennett's argument regarding the Hard Problem. The Hard Problem has to do with how and why we feel. However, Dennett believes the Hard Problem is misguided, because of the assumption that there are fundamental aspects of consciousness that cannot be explained by physical processes. But he says that feelings are emergent properties of the brain. Wouldn't this be a form of oversimplifying the hard problem and turning it into an easy problem? Wouldn't it no longer be subjective if we could explain feelings through physical processes?
I will attempt to respond to your question.
I believe that Dennett's approach is to reframe the problem, suggesting that subjective experiences, or "feelings," are emergent properties of brain processes. This doesn't necessarily oversimplify the Hard Problem; rather, it offers a different way to understand consciousness. By interpreting feelings as outcomes of multifaceted biological processes, Dennett implies that they arise from complex interactions of physical processes in the brain, which are, in principle, explainable by science. It's not about reducing feelings to mere physical interactions, but understanding how these interactions can lead to the complex and varied nature of subjective experiences.
I don’t think Dennett's view negates the subjectivity of experiences. I believe it actually proposes that this subjectivity, while real and vivid to us, emerges from objective, physical phenomena - understanding the physical basis of feelings doesn't diminish their subjective nature; it just provides a framework for understanding how such subjectivity arises.
Marie-Elise, cogsci is not doing ontology. It is not about what kind of "stuff" feeling is. It's about reverse-engineering how and why organisms can DO what they can do (EP) -- and also about reverse-engineering how and why (sentient) organisms FEEL (HP) (as well as whether and what they feel [OMP]).
All of that is of course "physical": Cogsci is not doing physics either.
Amélie, "emergence" is one of the weaseliest of weasel-words. It explains nothing. It is just a synonym for "It just happens: I have no idea how or why!" "Interpretation" is not explanation, it's hermeneutics. ("Complexity" is "emergence's" weasel-cousin. "Subjectivity" is yet another! Stick to "feeling" -- or, if feeling Latinate or Romance, use "sentience.")
Dennett presents the intuition that many of us have that we can feel but Turing’s robot cannot. According to Dennett, we should discredit this intuition, as our intuitions in science have misled us in the past. Furthermore, according to Dennett’s heterophenomenology, the only important aspects of cognitive science are things that are measurable, which can include beliefs, emotions, fears, brain activity and anything else detectable through observation, in addition to our external doings. While Dennett may be correct in saying that our belief that we can feel while another robot cannot is no more than an intuition--since we cannot know whether another being can feel (OMP)--he is discrediting an important fact that we certainly know that we feel. This fact gives rise to the Hard Problem (how & why do we feel?), which Dennett fails to address in this piece.
Hi Jessica,
I agree with you that Dennett fails to address the Hard Problem. From my understanding, Dennett proposes the usefulness of heterophenomenology, a way of studying beliefs or subjective experiences, restricting himself to what he calls "3rd-person science". However, when it comes to addressing the Hard Problem of consciousness (how and why organisms feel, rather than just do), heterophenomenology is not capable of solving it, and actually seems to deny it altogether. Indeed, Dennett believes that feelings are mere beliefs, and knowing about the correlates of feelings does not tell us much about how and why feeling organisms feel.
Hi Jessica & Melika! My idea is a little different. I think heterophenomenology has the capacity to solve the HP. The HP is so far undefinable, except for its vague thesis (how and why we feel), which distinguishes it from the EP, which we think we could solve first.
However, the problem really is how to do such 3rd-person science: the other-minds problem is the first challenge in front of us. Additionally, 1st-person science (science in the common-sense meaning, I think) is still questioned and debated in the philosophy of science, since our theories and knowledge are highly situated in our own personal experience, cultural background, and so on. For these reasons, Dennett's heterophenomenology seems as abstract as the HP itself. I hope I haven't misunderstood the reading; please let me know if there is anything I got wrong.
I find this comment chain interesting because it highlights the difference between the A and B camps (to my understanding Melika and Jessica belong to B and Evelyn to A). I am still trying to figure out where I fall, but I think my heart is in camp B. I appreciate that heterophenomenology, as highlighted, is not just verbal reports but involves physical sensations and reactions, and can observe phenomena like deja vu and blindsight (which are admittedly cool phenomena). I don't agree that it covers everything to do with "consciousness" (feeling and what goes on in our heads), and I do think that this again comes down to feeling. To me, feeling is different from the other mechanisms of the brain and nervous system that can be easily broken down. If my cheeks feel hot, it may be a physical reaction to being embarrassed, but it is different from feeling embarrassed, which is also different from the thought "well, that was embarrassing". Just because we have some unfelt states doesn't liken us to zombies, because, importantly, we also have felt states, like "beliefs", that cannot be observed.
Jessica, you're right about the HP, but DD is wrong to lump together "beliefs, emotions, fears, brain activity" as "measurables". You can measure behaviors (and test behavioral capacities) and you can measure brain activities that are correlated with beliefs, emotions, and fears, but you can't measure the feeling of emotion or fear (see reply to Tina & Eugene).
"Belief" is a weasel-word, because it too is a feeling: it feels like something to be believing that today is Tuesday and that the cat is on the mat. It feels different to be believing that today is Wednesday and the mat is on the cat. A belief is like a toothache when you are feeling it, but when you are not feeling it, it's just an unfelt state that can be activated if you are asked a question, but that is as unfelt as last week's toothache, when you are neither feeling nor recalling it.
So you can measure the brain activities correlated with believing something (when you are believing it), just as you can measure the brain (and dental) activities correlated with feeling the pain of a toothache when you are feeling a toothache, but you are measuring the observable correlate, not the feeling.
Melika, this is why "belief" is a weasel-word.
Evelyn, the 1st-person/3rd-person distinction (fine in grammar, and fine in distinguishing what I feel from what everyone can observe) is bogus when it comes to science.
Josie, A-team/B-team does not cover all the "nuances" (as ChatGPT's trainer likes to train GPT to call all unresolved "complexities"). "Heterophenomenology", mental "weather-forecasting" is also T3/T4 reverse-engineering. The philosophical bickering about whether and what feelings "are" is ontology, not cogsci.
In saying that we can measure brain activities that are correlated with feelings, but not the feelings themselves, are we indirectly attributing the cause of these feelings to such brain activities? This still doesn't address the "why" aspect of the hard problem, but I wonder if it is (very faintly) in the direction of addressing the "how".
Jocelyn, we assume the brain causes both doings and feelings, but with feelings we cannot explain how or why.
Along this vein of thought, I found the idea of heterophenomenology interesting. If belief is a feeling, heterophenomenology requires data (such as someone passing the TT) and beliefs about experiences (feeling). This would prevent the OMP from being an issue, as having a mind and feelings (assuming a mind is needed for feelings) becomes essential. This method would easily differentiate zombies from non-zombies, potentially helping to solve the HP.
Dennett's heterophenomenology seems to assume that Turing's approach can address the EP AND the HP. He explains that once cogsci figures out the connections between feeling and brain activity, for example, then we'll be able to explain both cognition AND feeling. That being said, the HP can't be solved by reverse-engineering cognition. Like many others mentioned, building a T3 robot could help us build something that can feel, but we won't foreseeably be able to know whether it is actually able to feel. We assume those around us aren't zombies, and we might assume a T3 robot also isn't a zombie. However, this assumption is not based on us knowing the causal mechanisms of the robot. So even once you successfully reverse-engineer all the measurable and observable properties of sentient organisms, the HP still remains unexplained.
Nicole, DD does not say belief is a feeling; he says feeling is just a belief that there is feeling.
Think a little more about that: "Pain is not a feeling that really hurts; it is just the belief that pain is a feeling that hurts."
Try telling that to a crying child, "Your stomach-ache is just a belief, like believing that there is goblin under your bed."
If this were true, then all talk about pain would be like talk about flying-saucers. HTRPHENO does not provide this explanation; all it does is deny the difference between really feeling something and just believing you are feeling something,
You misunderstood the point that if cogsci could successfully explain "how and why T3 (or T4) could not do everything we can do without having the capacity to feel" [i.e., explain how and why there could not be a T3/T4 zombie], then cogsci would have solved the HP.
In other words, successfully explaining how and why there could not be T3/T4 zombies would be the same thing as successfully explaining how and why we feel (HP).
HTRPHENO does not provide this explanation. It simply denies the difference between really feeling something and merely believing you are feeling something.
Please make sure you understand this clearly, Nicole.
Miriam, I think you understood this. Solving HP is part of cogsci, but solving EP (i.e., successfully reverse-engineering T4 or T5) still does not solve HP. And denying that there are feelings does not help. So HTRPHENO does not help.
Hi everyone,
I am coming to this comment quite late, and I completely agree with what everyone has said about the Hard Problem. I would like to add something to all of your comments. What I found interestingly wrong in Dennett's proposition is not necessarily the claim that what matters is only what can be measured, since he is allowed to think that it is unnecessary to torture ourselves with what can't be measured. Rather, I think he is wrong in limiting all measures of "consciousness" (sorry for this weasel-word, I just don't know how to name it differently) to objective and subjective ones. And this is, to me, the entire complexity of the Hard Problem: it seems to us that we haven't yet found a way to assess individuals' consciousness. For instance, coma patients, who can't verbally report their experiences but still show some cerebral activity, could be seen as conscious or not, as feeling or not, and we will never know, because even if they were able to report their experiences, we could not necessarily be 100% sure that this experience is truly felt, since we are not in the other person's body. And that's where we come back to the OMP. Maybe a first step toward answering the HP would be to try to find a measure for the OMP.
(I know Professor Harnad will hate my comment because it is full of weasel-words, very sorry about that!)
I just realized that I made a very important mistake in how I described Dennett's proposition of heterophenomenology. Indeed, he doesn't principally focus on verbal reports to assess consciousness, but rather on the combination of objective reports (what he calls "3rd-person data") and subjective verbal reports ("1st-person data"). In doing so, his method aims to create a comprehensive, external description of human consciousness. However, human consciousness can hardly be understood only from the outside; it is understood mostly from the inside, from how each of us feels.
I see where you're coming from, Jessica, but what you're saying still relies on introspection, if I'm not misunderstanding it. I completely agree with Juliette's argument about coma patients. I think that Dennett's proposed approach is a more tedious way of tackling the hard problem, but I also think that it would yield more reliable results in the long run.
Hi Eugene,
My understanding of the T3 robot is the same as yours: that it is indistinguishable for life, in any aspect, from an actual human. Given this, then, isn't the question of "what robotic behaviors humans have" irrelevant? I feel like it's not that the human has robotic behaviors, but rather "that the robot has human behaviors" - in the case of T3, every human behavior would be an example of a human behavior that the robot has (since, by definition, the T3 robot is indistinguishable from a human).
I'm less sure about whether the zombie would be more akin to a T4 or T5 candidate, but I think it might help to start with definitions. T4 means that it is also indistinguishable in internal makeup from a human, and T5 means that it's physically indistinguishable (I'm not exactly sure what this means). Chalmers's Zombie is functionally and physically identical to Chalmers and can report internal states, but has NO conscious experience.
Going off these definitions, wouldn't the question of whether the zombie would be T2, T3, T4, or T5 depend on what we take to be the necessary level for conscious experience (which goes back to the big question of Turing tests in general)? For example, if we take total indistinguishability from a human in behavior, sensorimotor capacity, and constitution to be enough for conscious experience, the Zombie couldn't be a T4. However, if we said that all this was still not enough for conscious experience, I don't think we could say that it would be wrong to classify the Zombie as a T4.
On the page numbered 461, Dennett argues that heterophenomenology allows scientists to predict phenomena like motion capture or change blindness. Tom Nagel criticized 3rd-person science for not being able to explain correlations in data, and Dennett retorts with an example of chemists being able to predict molar properties before creating unknown polymers. I don't think that this example really proves what Dennett intends; to me it further points out how unknown the "how and why" are behind feelings. In the case of the chemists, they know that electronegativity has a correlation with molar weight because they have tested it on a multitude of naturally occurring elements. Heterophenomenologists can predict how someone will report their experience (seeing or not seeing motion) based on what they know about brain activity, but they still will not know how this report of motion relates to feeling. Dennett's argument on page 461 does not address the HP, making it difficult to say that heterophenomenology makes advances in cogsci.
To add to this, a chemist does not need to know exactly WHY electronegativity correlates with molar weight if it is the same every time. If their goal is to reverse-engineer the creation of a polymer, the fact that this rule is always true is enough. For cognitive science, simply observing the output of someone's feelings is not enough to reverse-engineer it; the HOW and WHY feelings relate to brain activity or behaviour is the important part. Feelings are ignored in heterophenomenology in favour of third-person science to try to make the study of "consciousness" more objective, but this erases exactly what we are trying to solve with the hard problem.
Megan, yes, Dennett is brilliant, insightful, and creative, but he begs the question on this one. His lifelong intuitions derive from his "critical period" of doctoral studies with Gilbert Ryle (a philosophical behaviorist) at Oxford, and the very fruitful hunch that "mental states" can be reduced to and explained by non-mental states. Yes, they can be. But "mental" is just a weasel-word for felt; so the fact and causal function OF FEELING ITSELF (the HP) is not explained that way.
Adrienne, yes, HTPHENO is Turing's method plus some mental meteorology, which is hermeneutical rather than explanatory.
Hi Eugene & Ohrie! I think the reason to talk about the HP a bit before solving the EP is to determine where the boundary of the EP lies: which conditions are covered by the EP, and which are not.
I think a "Zombie" could be any machine that is unable to pass T5, which is where the vague criteria about "feeling" and "consciousness" enter the game, I think. I used to struggle with the question of whether there is any possibility that even a real human could fail T5. However, the solution is easy: the TT is lifelong. It means that even an initially "zombie"-like human being could still pass T5, as long as he/she kept making up for the deficiency for the rest of his/her life; the only definitive conclusion comes at the coffin.
In his text, Dennett promotes the use of heterophenomenology, a 3rd-person scientific method that is applied to human and animal "consciousness". From my understanding, 3rd-person scientific methods include the observation of a subject by a third party, which records data such as behavioral, visceral, hormonal, and verbal reactions, etc. However, it also includes a subject's introspective beliefs. In the case of heterophenomenology, changes in external and reported internal states would be studied in the context of changes in "consciousness" — or, to use a non-weasel word, changes in the experience of feelings.
I agree that this is how Dennett argues that heterophenomenology helps to address the hard problem. However, I don't see how this really aims to solve the hard problem at all, as it doesn't address the basis of how and why we are able to feel. Additionally, the concept of a 3rd-person scientific method doesn't make sense to me in this context. Why would we rely on the external recording of possible physical reactions of feeling in order to know the subjective experience of an individual? Again, it isn't able to explain how and why we feel, only that there might be some feeling going on. But even that seems like a stretch, because it seems possible for something to feel and not display any outward signs of this feeling.
From my understanding, although heterophenomenology does attempt to explain how and why we feel, I don't think it really explains the hard problem.
Since heterophenomenology takes a 3rd-person scientific approach to studying our consciousness, where it considers both the introspective reports of subjects and also takes evidence from all other information available (e.g., brain activities), it may answer questions like whether or not we feel, by showing correlates, but it cannot answer the hard problem of how and why we feel.
Dennett also mentions Chalmers's statement, "That's to say, no purely third-person description of brain processes and behavior will express precisely the data we want to explain, though they may play a central role in the explanation". I think this means that third-person information about brain processes can show correlates, but may have difficulty serving as precise proof.
Dennett also clarifies the definition of heterophenomenology: "Ditto for heterophenomenology: get the lore, as neutrally and sympathetically as possible". From this quote, I think that heterophenomenology can be a great method for gathering the information we want as objectively as possible, but it does not provide direct answers to the questions of "how" and "why" we feel.
Dennett appears to respond to Chalmers' and others' protests that the heterophenomenological approach to solving the hard problem leaves out felt experience (because a model that explains heterophenomenological data wouldn't have to feel, it would just have to act like it did) by saying that the "Zombic Hunch" is false. That is, although we believe ourselves to be distinguished from a T5 robot by our felt experiences, this is in fact an illusion. But this implies that he either believes that zombies can feel - which contradicts the definition of a zombie - or that humans cannot feel, in which case what does he think his human subjects are reporting on when he is collecting data for his heterophenomenological approach? Moreover, Dennett's refusal to consider explaining subjective experiences means that he doesn't even believe that the hard problem exists. The heterophenomenological approach is then useless, because it's exactly what cognitive scientists have been doing all along to try to reverse-engineer a solution to the easy problem of consciousness. Explanations of people's verbal reports on their internal states would be covered by a T3 robot, and the physical processes accompanying these felt states would be covered by a T4 robot. Therefore heterophenomenology brings nothing new to the field.
Aya, spot on.
I'm not sure if it is a matter of perception or not, but I think the author of this text is misinterpreting the concept of the Zombie Hunch as presented by David Chalmers. The author argues that the Zombie version of Chalmers, which is designed to be able to produce basically the same (mechanical) responses, both internal and external, as the real David Chalmers but which lacks consciousness, is actually conscious and does not think that he is a Zombie, just like "the real" David Chalmers knows that he is not a Zombie. The author argues that he does not see how Chalmers can use the argument of experience to say that the Zombie's beliefs of being conscious are false. However, by saying this I think the author is missing the importance of the role of experience in constituting the content of our beliefs. That said, I am a little confused, because, as I see it, the Zombie version of Chalmers could be considered a T3 robot but without the capacity to ground the words and symbols it encounters in their referents; so then, is this Zombie version of Chalmers like a T3 in terms of external and internal structure, but more like a T2 in terms of internal grounding capacities?
I was thinking about the same thing. The Zombie version of Chalmers can be seen as a T3 robot in terms of external and internal structure. It replicates all the functions and behaviors of a conscious being, making it functionally identical. However, the Zombie lacks what Chalmers would argue is a crucial aspect of consciousness—the internal grounding of experience. It can process information, respond to stimuli, and even report internal states, but it supposedly lacks the actual subjective, phenomenal experience. So, while the Zombie may resemble a T3 in terms of external and internal structure, it deviates in lacking the essential internal grounding that, according to Chalmers, constitutes true consciousness. So how should we classify the Zombie?
Hi guys, from what I understand, Chalmers is arguing that even if we were able to reverse-engineer a perfect T5 robot that is identical to a human in terms of internal, external, and neural structure (i.e., the robot can DO everything a human can DO), we would STILL be stuck at the Hard Problem. This T5 robot would be a zombie because it does not FEEL what it is like to exist. Heterophenomenology argues that subjective experience doesn't matter (both the zombie and Chalmers have the same raw data since they are exactly the same, so therefore the zombie is a perfectly reverse-engineered human). But we know that's not true – it FEELS like something to have experiences. So heterophenomenology and Dennett's Zombie are missing subjective felt experience.
Valentina, I think Chalmers's "zombie" is a T5 zombie. And I think it's as improbable (and as little worth talking about) as apples falling up instead of down. But a more general notion of "zombie" is anything that is insentient (i.e. does not feel anything). Rocks are zombies; so are rockets and rhododendrons (plants). The OMP for other species is about whether they feel. (It's an interesting question about a talking T3 zombie what its referent for its content-word "feel" would be.)
But DD thinks the question is not about whether they are zombies but about whether they believe they feel. But the trouble is that believing is a feeling...
Julide, the question is not about whether a T3 zombie is grounded (it is), but about whether it feels. (There is a puzzle about what a zombie could mean by "feel": there is a potential answer too: what is it?)
Kristi, a T5 zombie is nonsense. But a solution to the HP would have to explain why it is impossible.
Going back to what Professor Harnad said about a T5 Zombie being impossible, I understand that to be because a T5 is exactly physically identical to us in every way. Since we feel, logically it would feel as well. Explaining why a T5 Zombie is impossible is an answer to the HP, because it explains what it is about us that causally makes it the case that we are not Zombies.
Spot on, I believe, Omar. The possibility of a T5 zombie is definitely absurd. I think it infringes upon metaphysics - completely outside the scope of cognitive science, the reverse-engineering of human cognitive capacity. If Chalmers's "Zombie Hunch" really means a T5 zombie, then what is he really suggesting here? Is he essentially putting forth dualism and suggesting that feeling isn't generated by our physiology?
I see. What I said was related to grounding, not feeling. Due to the Other-Minds Problem, it remains uncertain whether things like a boiling pot of water, ChatGPT, a T3, or another human being feel. That's what I should be focusing on. It's important to note that the OMP is distinct from the HP of consciousness.
In this paper, Dennett argues for the utility of heterophenomenology for studying some of the facets of consciousness by converting them to a 3rd-person perspective. He seems to make the case that this approach can bypass/remove the roadblocks involved with the other-minds problem, as it turns accounts of subjective experience into more quantifiable, "neutral" data.
ReplyDeleteThe issue that Dennett seems to fail to address is that this approach is only measuring an abstraction (which is the doing/saying/reporting) of the feeling that is still gated by the hard problem. In other words, heteromorphology seems only to measure the “doing” capacities correlated with feelings and doesn’t necessarily provide any causality to the feelings themselves. Measuring blushing, collecting somatosensory data and other physiological metrics may appear to strengthen (or objectify) the heteromorphological approach, but I would argue it is just adding a more complex swath of data to what is still just emergent (or secondary) to the feeling of what it is like to experience them, or their experiential precursors. So, at best, this approach may offer insight into the behaviors associated with feelings, but it seems entirely miss the mark on unveiling the hard problem itself.
Gabe, I agree, Dennett’s heterophenomenology misses the mark on the hard problem as it doesn’t provide any causality – he provides no explanation for how or why we feel, only a method for collecting data about the behaviors correlated with feelings. He seems to find issue with Chalmers’ view that we must explain subjective experiences, opting instead, as he states in a response to Chalmers, for the view that it suffices to study data on subjects’ beliefs about their subjective experiences. I fail to see the logic in this. A belief is just a feeling, and is not measurable in the way that Dennett seems to suggest. Verbal expressions of beliefs, brain activity at the time of a feeling of belief, and other somatosensory data, while helpful, do not explain the feeling of belief itself. Dennett states that subjective experience is not useful data, but that a subject’s beliefs about their experiences are useful pieces of data – but a belief is a subjective experience, so something isn’t adding up.
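One way to see Gabe's and Adam's point concretely: if you wrote out the kind of record heterophenomenology collects, every field would be an observable correlate, and there would be no field for the feeling itself. Here is a minimal Python sketch (the field names and values are illustrative, not Dennett's):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HeterophenomenologicalRecord:
    # All 3rd-person-observable data about the subject:
    verbal_report: str      # what the subject says they feel
    heart_rate_bpm: float   # physiological correlates
    skin_conductance: float
    brain_activity: List[float] = field(default_factory=list)  # e.g., summary EEG/fMRI values
    # Note: there is no slot for the feeling itself, only for its observable
    # correlates. That missing slot is what the Hard Problem is about.

record = HeterophenomenologicalRecord(
    verbal_report="I feel embarrassed",
    heart_rate_bpm=96.0,
    skin_conductance=7.2,
    brain_activity=[0.41, 0.37, 0.52],
)
print(record.verbal_report)  # a report about a feeling, not the feeling
```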
Gabe, I think you may get the point but it's hard to say because of all the un-kidsibly words you are using. Please see the other replies for 10a and 10b.
Adam, that's about it (though believing something is just one out of an infinity of possible feelings). But, yes, there's a difference between a feeling and a belief about feeling; and believing is a feeling too.
Eugene, "robotically" means sensori-motorically. T4 includes biochemical processes, which are also physical and observable. (Human) T3 includes everything "cognitive" that humans can DO.
Zombies are (or should be) not only human T5s that do not feel (if that is possible); to make sense, a zombie is any physical thing that does not feel, including stones, planets and atoms. So far, the only non-zombies are (some) biological species. Don't mix up the OMP and the HP. Both, however, are problems for cogsci, as well as the EP. (Yes, some think the solution to the HP will be attainable after the EP is solved; some think not. You should reflect on the nature of the HP, and why it is a problem.)
Ohrie, strictly speaking, cogsci's mandate is only to reverse-engineer all cognitive capacities, but the boundary between the cognitive and the vegetative is fuzzy enough that we should not haggle about that. If some vegetative capacities are needed to produce T3, that will become evident as reverse-engineering progresses.
(The T4/T5 distinction is between natural neural function and synthetic neural function, but it is not really relevant to this course. Chalmers's zombie is T5, but the same could be said of T4; and the best understanding of being a "zombie" is being insentient: having no states that it feels like something to be in. Also, the right TT-level for sentience is not a matter of choice or definition. It is the one that can DO everything ("cognitive") that a normal human can do: that sounds like T3, along with as much of T4 or T5 as proves necessary to pass T3. Whether it is still insentient, hence a zombie, is not up to us, and is blocked by the OMP. Turing just suggests that neither we nor cogsci should try to be holier than the pope, who, like evolution's Blind-Watchmaker, is no mind-reader either. The bottom line is what can be determined by empirical science, within the limitations of empirical underdetermination and human imagination.)
Evelyn, the OMP is beyond our choice of TT-level, and independent of it. We can't be certain about whether a rock or a toy robot IS insentient, just as we cannot be certain that a T3, T4 or T5 ISN'T insentient. Turing reminds us that Turing-indistinguishability is the closest we can get. All scientific explanation is underdetermined, but the OMP makes cogsci even more underdetermined, because each of us knows (Cogito) that there really are felt states.
Garanca, you are right. Dave Chalmers just gave a name to the problem. About what matters: we'll try to cover that in Week 11.
I found the concept of heterophenomenology as a way of solving the hard problem of consciousness, and Dennett's philosophy behind it, quite thought-provoking. I feel like this can be linked back to the Mary's Room thought experiment often mentioned in debates about consciousness and AI. Mary, confined in a black-and-white room, learns everything about color vision, all the neuroscience down to the molecular level. The question is, will she somehow 'gain' some understanding by getting out of the room and actually experiencing color? It seems like Dennett would be a proponent of the view that Mary would not acquire any extra understanding, which at first sight seems to be incorrect.
Hi Aimee, heterophenomenology is an interesting perspective but I don't think it solves the hard problem. Unfortunately, if we record all the raw data (e.g., verbal reports, brain activities, heart rate changes, hormone changes) associated with a specific experience, we are still only correlating this 'neutral' data with the feeling. We aren't actually determining the causal role of those correlated feelings. I would argue that Mary can learn every single detail about colour perception, but until she ACTUALLY sees colour, she is stuck in the zombie paradox – she doesn't know what it feels like to see colour. How would you explain the sheer excitement on children's faces when they first use hearing aids or glasses? This is because it actually FEELS like something to have a subjective experience, something that cannot be captured with just words or measurable data.
Aimée and Kristi, you're still using a lot of weasel-words. The colorblind-neuroscientist Mary puzzle assumes that Mary can see black and white but not color, and knows everything there is to know about color-vision physiology, and then gets her eyes fixed. Would she be surprised at how colors look? Not only is surprise a feeling, but so is B/W vision. If Mary were a real person, she would already have feeling. How much would she be surprised? Less than if she did not know the physiology, but it sure would not feel the way it felt to see B/W. If she were a T3 robot, she would have grounding, but if she were also a zombie, she would not have feeling at all. (One gets no more out of these silly puzzles than one puts into them.)
But remember that grounding and feeling are not the same thing; grounded does not necessarily mean felt.
Hi Prof, indeed Symbol Grounding only answers the easy problem, not the hard problem. Even if Mary understood everything about colour vision, we still don’t know why it would (probably) feel like something to really see colour for the first time. If she were a T3 robot, she would have grounding, but we’d never know if she feels (i.e. is not a zombie) because of the Other-minds problem.
Dennett uses this paper to argue that there are special merits to using third-person explanations (rather than first) to understand the experiences of conscious creatures. He suggests that we should use heterophenomenology to do so, in which one can record all the variables about a person as they are speaking or describing some experience. Implicit in hetpheno is the idea that there is some meaningful component to answer the hard problem that is not captured by first-person accounts, but is conveyed by physical behaviour or measurements of neural activity. Specifically, he suggests that there are limitations to our own metaconsciousness (awareness of one's own consciousness) that undermine our ability to accurately answer the HP for ourselves, but that blind spots can be identified through measuring other variables. I think this is an interesting perspective, but one that I still struggle with, as the things we measure (whether it is accounts of emotion, already a shakily defined metric, neural activity, or body language) are still things we implicitly believe are important for understanding the HP. Dennett would argue that something about one's body language can tell us about their experience and state of consciousness, but still argues that there are critical blind spots that make first person unreliable. My question is why these blind spots don't extend to the third person. From my view, if there were a 'sign' of consciousness that is not detectable in oneself, I would be skeptical that we'd be able to detect it in others. Because of this and the other reasons mentioned in this thread, I found this article interesting but ultimately unsatisfactory in getting any closer to an answer to the HP.
Madelaine, all these W-Ws! "experiences" "consciousness" "awareness" "1st/3rd person" "metaconsciousness" "awareness of consciousness" -- rewrite all of them using f/f/f (feel/feeling/felt) and see what's left that still makes sense (especially the "meta" stuff: feeling vs felt feeling)...
Throughout Dennett's presentation of heterophenomenology, the methodology he proposes in order to base the study of consciousness in third-person science, I was struck by how incomplete and unsatisfying I found this proposed method. Dennett argues that this methodology provides a "total, dictatorial authority over the account of how it seems to you, about what it is like to be you" through a neutral interpretation of an individual's subjective experience (461). However, I struggle to comprehend how this method solves the two failures of overlap Dennett briefly mentions (false positive and false negative), and further how the descriptions of one's experiences can be faithfully, and completely, delineated by this method. Chalmers offers a compelling critique, the Zombie Hunch, which Dennett responds to by saying that both Chalmers and his Zombie twin would have the same 'heterophenomenological worlds,' regardless of the fact that the Zombie is lacking what we are interested in studying—consciousness. How then can heterophenomenology provide a comprehensive depiction of "what it is like to be you" when it does not distinguish between a conscious and feeling thinker and a zombie?
Hi, I agree with what you said about Dennett's heterophenomenological methodology. While Dennett argues that it grants comprehensive authority over subjective experience, I would also question its ability to address failures of overlap and provide a faithful and complete depiction of individuals' experiences. The challenge becomes apparent in Chalmers' Zombie Hunch, which questions how heterophenomenology distinguishes between conscious beings and zombies. Dennett's response, emphasizing observable behaviors and linguistic reports, may fall short in capturing the internal, subjective aspect of consciousness. This then taps into the debate about whether third-person methodologies can fully capture the richness of subjective experience. Dennett should address concerns about whether his approach, focusing on external aspects, can truly offer a comprehensive understanding of "what it is like to be you," especially in light of challenges like the Zombie Hunch.
Shona and Selin, hard to answer, but my guess is that, since Dan thinks that feelings are just beliefs about feelings, and since beliefs can be verbalized, any differences about feelings can be picked up from verbal reports, and zombies are just people who don't believe they feel. But since I don't believe that feelings are just beliefs about feelings (nor that that belief even makes sense), none of this makes any sense to me. (BTW, I don't believe that there are human zombies, except maybe people when they are in delta sleep or in a chronic vegetative state, nor that most normal mammals, birds, reptiles, fish or most invertebrates are zombies; but I do believe that plants, fungi, and sea-shells are.)
Does Dennett then argue that the verbal reports of the feelings (their beliefs about feelings) of Chalmers and the zombie twin would be identical? I am confused as to how Dennett argues against Chalmers' zombie hunch by saying that the zombie himself also "fervently believes he himself is not a zombie" -- how would the zombie be able to have these beliefs about his feelings without consciousness (464)? I think that this zombie could have feelings, but I struggle to understand how the zombie could have beliefs about these feelings.
So Dan thinks that feelings are just what we believe we feel, and the Professor thinks that feelings exist and we can have beliefs about them but there's more to it (?). Shona, what I understood was that zombies don't believe or don't "know" that they can feel, so I'd say that they are not conscious; and if they're not conscious, how would they know whether or not they are zombies, or anything else? If we're saying that belief is thought in this case, then would they not be practically unconscious?
Joann, one can only agree with Dan that a T5 zombie is about as improbable as a tachyon or any other violation of well-established natural laws. But I think this is a matter of probability, not logic -- until and unless someone proves that it is logically impossible. Yet a proof that zombies are logically impossible (hence that feeling is logically necessary) would also be a big step toward the solution of the HP. Alas, there is no such proof in sight -- about zombies or about any other provisional empirical regularity. So the HP remains unsolved, both empirically and logically. The brain does produce feelings, just as it produces doings, except we have no idea what it produces feelings for, and we have no idea how brains do it (but they do).
It's the easiest thing to say "feelings may have been an evolutionarily convenient way to allow living organisms to survive by enabling them to do what they can do", and it may even be true, but the HP is to explain how and why it's true.
Antonio Damasio would no doubt agree too. It may even be true that "feelings might be just the product of the brain's ongoing efforts to monitor and adjust the body's internal state (homeostasis)". But to solve the HP we need a causal explanation of how and why that monitoring has to be FELT rather than just DONE.
Dennett's heterophenomenology shows a different approach to cognition and to how it can be studied in a scientific manner. In fact, the name of his method is quite self-explanatory, and suggests that rather than relying solely on an individual's personal experience of what cognition might be or what it might enable him to do, we should also incorporate any "internal conditions detectable by objective means", which he calls the third-person data.
If we follow his perspective, heterophenomenology is supposed to explain the errors that we make without necessarily being aware of it (before reading this article I would have used the word "unconscious" to refer to those kinds of mistakes, but since our brain activity is involved, and integrated in the 3rd-person data, I don't think that it would be relevant to exclude consciousness from those wrong interpretations), based on our introspective and sensory systems. So once again, it proves that sensorimotor interaction with the world is an essential condition (but not a sufficient one) to explain what is going on in our heads when we think.
Adrien, kid-sib didn't fuly undetsand all that: Could you perhaps de-weasel it so K-S can understand it better? Maybe state DD's heterophenomenology and explain whether and where it diverges from Turing's method, and Turing's statement of its limitations.
Heterophenomenology derives from the two Greek words "hetero", meaning "the other", and "phenomenon", meaning "appearing to our senses". Thus, this new approach takes into consideration what Dennett calls third-person data, which are any observations that we can make of someone's behaviour or physical/mental state. I think that one main difference between Turing's method and heterophenomenology is the fact that in the imitation game, the observer relies only on his own experience of the conversation with the machine, whereas with Dennett's methodology we can rely on data coming from the other's internal state.
DeleteIn a sense, Turing’s method doesn’t take feelings into account as there is no sensorimotor integration throughout the TT and the only way to communicate is through written language. On the other hand, heterophenomenology is able to integrate sensory information and thus, take feelings into account.
I'm with Chalmers on his hunch that there really is such a thing as the HP, and that "no description of brain processes and behavior will express precisely the data we want to explain", i.e. what it is like to be someone. And like a lot of other comments, I don't think HETPHENO is particularly productive at getting rid of the HP. But I also don't buy Chalmers's zombie thought experiment either--that you could have a zombie that's molecularly identical to you, but that it still does not feel like something to be that zombie (464). I feel like these two positions I hold might be contradictory, but I'm not sure. For clarity, they are 1) that no amount of data alone could reproduce what it is like to be someone, and 2) that a "zombie" that's molecularly identical to you wouldn't actually be a zombie, but would have something that it is like to be it.
Hi Elliot! I agree with the first part of your post – siding with Chalmers and rejecting the utility of Dennett's heterophenomenology in addressing the Hard Problem – however I disagree with your final point. I think this all comes down to whether you believe consciousness only comes from the biological circuitry humans are all equipped with. Before our discussions of T4+ robots I would've agreed with you that a molecularly equivalent zombie would have some form of feeling, however I think this is just a different form of the argument we saw back when we studied mirror neurons, where we discussed whether or not reproducing the human brain would be enough to generate consciousness. This is similar to your point 1) that "no amount of data alone could reproduce what it is like to be someone" which I agree feels contradictory to your point 2). I would side with the argument that we can't generate consciousness through mere reproduction, but then again, this may just be my natural "feeling" that humans must have an innate factor that a zombie-clone would lack. (Sorry for all the weasel words).
Elliot, I think I may disagree with your first position, and if you'll allow me to explain, it might solve the seeming contradiction in your two views.
(I'm having trouble expressing the following views succinctly, so sorry if it's hard to follow.)
I think that there is an amount of data sufficient to reproduce what it feels like to be someone. Imagine you were designing a copy of a feeling being and you had identified all the decisions you needed to make along the way to an end product: they need these neural circuits here, they need to have felt state x when y happens, etc. Each of these decisions represents a data point, a bit of information about the program/machine that influences the way it behaves (feeling included as a behaviour). If you had to implement this replica feeler computationally, you'd have a hell of a job, because each of those data points needs to be turned into binary, and it wouldn't be feasible for logistical/technological reasons. The amount of data you would need amounts to having a representation in the computer for every proton, electron, neutron, etc., including their position, spin and any number of other physical properties. Each of those would need to be represented symbolically, which would be impossible -- unless you were just to create a molecule-for-molecule replica of the feeler you were trying to replicate anyway. Then each data point is accounted for and physically implemented in 1:1 correspondence. Thus your second position.
In essence I'd argue that you need exactly as much data as is held in the molecule for molecule copy of the person you're trying to replicate. I hope this wasn't total gibberish but please let me know if it was.
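(A rough back-of-envelope sketch of why a symbolic, particle-level description is logistically hopeless. The numbers are purely illustrative assumptions, not anything from the readings: roughly 7×10^27 atoms in an adult human body is a commonly cited estimate, and 64 bits per stored property is arbitrary.)

# Illustrative estimate only: bits needed to describe one human body
# particle by particle, under the assumptions stated above.
ATOMS_IN_BODY = 7e27        # commonly cited estimate for a ~70 kg adult
PROPERTIES_PER_ATOM = 7     # assumed: 3 position + 3 momentum + spin
BITS_PER_PROPERTY = 64      # assumed precision per stored value

bits = ATOMS_IN_BODY * PROPERTIES_PER_ATOM * BITS_PER_PROPERTY
zettabytes = bits / 8 / 1e21
print(f"{bits:.1e} bits ~ {zettabytes:.1e} ZB")   # ~3.1e30 bits ~ 3.9e8 ZB
# Millions of times the world's total data storage -- before even
# considering the dynamics that would have to update all of it.

Which is Stephen's point: at that grain, the only feasible "encoding" is the physical copy itself.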
Thanks for the responses, Lillian and Stephen.
Stephen, I think you're clearing it up for me with your argument that you'd need as much data as is held in the molecule-for-molecule copy of the person to create a replica... maybe you could produce, with less data, a "zombie" that it actually feels like something to be, but at the very least I'm now more convinced that there is an amount of data that could express what it's like to be someone. But what I now also realize is that this doesn't mean we have to reject Chalmers's suggestion that "no description of brain processes and behavior will express precisely the data we want to explain" (what it is like to be someone). Perhaps it can be true that what it feels like to feel is reducible to biological data AND that that biological data is not the feeling itself, in the same way that the information on my computer screen ultimately boils down to hardware and some ones and zeros, but I can still be seeing a lot more than just the ones and zeros. Maybe this is getting dangerously close to the E-word (emergence!), but this is how it makes the most sense to me right now.
Sky1: From my understanding, Dennett is saying "feeling" is just a belief, but he misses how it's circular. If we can believe in feelings, it means feelings exist (the feeling of believing). That's the Hard Problem. Dennett dodges it by treating it as a belief, using regular methods like heterophenomenology. But he skips the real issue—finding the cause of feelings. The article's a little hard to follow, maybe because Dennett uses many different (sometimes weasel) words for "feeling," making it a bit confusing.
Sky2: I thought the article "Animal pain and human pleasure" was very interesting. My whole life I’ve been told that one cannot be healthy on a diet with no meat in it. Because of that, I’d always believed that being an omnivore meant we needed to eat both meat and plant foods (not just either one). It is interesting to see all the misinformation present in this domain.
"The Fantasy of First-Person Science” brought back thoughts about the Hard Problem. Heterophenomenology, discussed in the context, doesn't crack the Hard Problem because it doesn't explain how and why we feel things. Imagine a "Zombie" - like me but without feelings and who lacks conscious experience. Dennett suggests that the belief in our ability to feel, as opposed to a robot, is based on intuition and may not have a strong rational or factual basis. While we cannot be sure whether or not other beings like plants or bacteria can feel, there is no denying that human beings can feel. The question we are trying to solve is how and why we can do that.
I find the appendix featuring the conversation with Goldman constructive in structuring the debate. Goldman argues that cognitive scientists give prima facie credence to subjects' reports, challenging Dennett's characterization of heterophenomenology as agnostic. Dennett, in response, defends heterophenomenology as a neutral and agnostic approach, emphasizing its role in gathering data for scientific investigation without committing to the truth of subjective reports. The discussion touches on topics like visual seeming, blindsight, and the nature of feeling states, with Dennett asserting the compatibility of heterophenomenology with the standard methodology of cognitive science.
My question is: must the arguments from Team A and Team B be contradictory? Since neither stance can fully explain the HP (regardless of whether the OMP is involved), could we adopt both first-person experience and third-person explanation (heterophenomenology) when delving into the how and why problem? I would consider it analogous to a previous discussion: cognition is not all computation, but it can involve computation.
Hi Kristie, to add to your question: in the context of debating the state of feeling (the hard problem), the positions of Team A and Team B do seem to contradict each other. Team B emphasizes the first-person point of view -- the subjective and qualitative aspects of feeling that seem challenging to capture through third-person, objective methods. Team A, on the other hand, emphasizes third-person explanations like heterophenomenology (I am still confused about what it is exactly), specifically the observable, measurable, and objective aspects of feeling. But I think they can be complementary: Dennett’s heterophenomenology attempts to bridge the gap between subjective experience and objective analysis by taking first-person reports and subjecting them to third-person scientific analysis to identify patterns. But according to the previous skies, it doesn’t actually address the hard problem (DD reduces feelings to beliefs, which turn out to be feelings too), so I am also curious about how we can integrate both first- and third-person points of view.
I’m not sure if I am correct in drawing this parallel, but throughout reading Dennett’s in-depth advocacy for heterophenomenology, I could not help but think: is this not just another way to frame behaviourism? Leaping over the zombie hunch would simply reinforce the idea of a “black box,” which is the fundamental characteristic of behaviourist schools of thought. I found it a bit hard to follow Dennett’s arguments on this basis, as it appeared to me that he was trying to introduce a different approach to studying cognition, but not really saying anything new, nor giving any substantial solution, as it is endlessly homuncular in the way that behaviourism is.
Dennett's attempt to 'solve' the hard problem (or rather to give a path for solving it) doesn't quite hit the mark. He is basically arguing that we can distill felt states into neurological data. I leave the question of whether that is true mostly open, because although I disagree, it is not the important omission here. The hard problem is not solved by saying that if we create a T4 or T5 robot it will, to a satisfiable degree of certainty, feel. We need to explain why it would feel. What is it about feeling that is necessary, if we can causally explain all functions without it?
To supplement, I revisited Harnad's paper from 2011. Though I'm not sure if this is the right way to frame it, there seems to be a link between the two perspectives: Harnad's emphasis on the limitations of causal explanations somewhat aligns with Dennett's perspective on heterophenomenology. Both suggest a form of agnosticism regarding the nature of subjective experiences (to what extent we can learn about what we cannot observe or experimentally manipulate), acknowledging the challenges in fully capturing or explaining them through causal models. At the same time, Harnad's argument that causal explanations lack room for feelings echoes Goldman's questioning of whether these mechanisms can capture and explain the subjective, phenomenal aspects of cognition.
From my understanding, Harnad and Dennett both highlight the difficulties in explaining subjective experiences (excuse the WW; I don’t know what to substitute for it) using causal explanations. In heterophenomenology, Dennett is agnostic about fully grasping subjective experiences through observable causes (like you said). There is a consensus that the full picture of what is happening inside our heads might be a bit elusive when we only look at external, observable factors/causal models. But how can we do otherwise?
From this reading, I understand that heterophenomenology denies that the hard problem exists. Dennett and Chalmers seem to be on opposite “teams” when it comes to interpreting Turing tests. Dennett appears to be on Team A, which believes that, to successfully address all questions without any lingering philosophical implications, Turing demonstrated how we could exchange the first-person viewpoint advocated by Descartes and Kant for the third-person perspective employed in the natural sciences. Chalmers believes that Team A does not take consciousness sufficiently into consideration, and his explanation aligns more with what we learn in class. The idea of “first-person science” is a bit hard to differentiate from just cognitive interpretation, or interpreting phenomena while focusing on mental/cognitive processes. Is there a clear or particular distinction between these concepts, or are they practically the same?
Although I will be repeating a bit of everybody's concerns: reflecting on Dennett's "The Fantasy of First-Person Science," I find his dismissal of the Hard Problem troubling. Dennett seems to evade the fundamental issue of subjective 'feelings' by relegating them to mere cognitive illusions within his heterophenomenological approach. While methodical and based on gathering objective data, this approach sidesteps the crux of the study of feelings: the 'why' and 'how' of feelings. Dennett's reluctance to engage with the Hard Problem leaves a critical gap in our understanding, reducing rich, subjective feelings to mere observable data. As cognitive science advances, it is imperative to confront these 'hard' questions head-on, rather than circumventing them.
Dennett discredits the HP because, for him, there is nothing over and above brain processes. According to him, brain and language produce an illusion of consciousness, but we’re ultimately like T5 robots, as all of our cells are little, mindless robotic entities that come together to create our feelings -- which are apparently beliefs. So maybe there is not much of a difference between Anais and us after all... By using Turing tests to find a solution to Kant’s HP, he implies there might be nothing more to cognition than computation. This perspective is innovative and seems to bridge the gap between philosophy of mind and cognitive science, but it doesn’t sound quite right. Is the process of distilling felt states into neurological data (as he aims to do with heterophenomenology and his intentional-stance approach) enough to understand how and why we feel what we feel? Can we really measure differences in feelings through verbal reports? He might answer that feelings aren’t a thing, but doesn’t the process of believing involve a ‘feeling’ aspect?
In response to the questions raised about Dennett's perspective on consciousness, I would say that Dennett's argument, which equates brain processes with the entirety of consciousness, indeed challenges traditional notions of subjective experience. By asserting that feelings and consciousness are the results of computational processes in the brain, Dennett aligns with a more mechanistic view of cognition, which may be seen as reductionist by some. This approach indeed bridges the gap between philosophy of mind and cognitive science by treating consciousness as an observable, measurable phenomenon. However, the process of distilling felt states into neurological data, as Dennett suggests with heterophenomenology, may not fully capture the subjective aspect of feelings and experiences. While verbal reports can provide insights into a person's internal states, they might not encompass the entirety of the subjective experience, including the nuances of feelings. Dennett's view that feelings are not separate entities but rather byproducts of brain activities and beliefs challenges the traditional understanding of emotions and consciousness, raising questions about the subjective nature of our experiences and the role of belief in shaping them.
Dennett emphasizes the importance of using heterophenomenology, which applies the third-person approach of scientific research. The third-person approach means that the cognitive scientist takes both the subjective experience of the participant and the related objective measures to find an explanation for a phenomenon. I tend to agree with Dennett’s agnosticism: we don’t know that phenomena like “gut instinct” are what they seem to be, therefore we need to approach those topics with the scientific method.
An example that came to mind while reading about heterophenomenology was phantom limb pain. The cut nerves that used to end in the severed limb end up innervating surrounding tissue, such as muscle, and when the surrounding tissue is in pain, the brain interprets the signals from these afferents as coming from the missing limb. This is a great example of combining the subjective experience of the patient (they truly are feeling pain in their missing limb) with the objective finding that the pain arises not from the missing limb but from the severed nerves innervating other tissues.
This is not about the HP; I don't think Dennett was addressing the HP at all in this article. He's talking about the EP in cogsci, which is actually possible to investigate.
Chalmers raises a critical issue regarding heterophenomenology, highlighting its limitations as an empirical approach to studying phenomenal consciousness. The major limitation is that it fails to capture the intrinsic sensation of sentience or feeling on its own. Both you and I recognize what it feels like to believe something, which is a vital element of our phenomenal consciousness that heterophenomenology overlooks.
Heterophenomenology may be our current best empirical way of studying consciousness, but I don’t believe it is enough to answer the Hard Problem, as it doesn’t encompass everything we feel. I don’t believe this makes heterophenomenology a useless endeavour, however, just an incomplete one that is limited by both the technology we have and its framework. I do wonder whether supplementing some of this framework with research on loss of consciousness would in any way help bridge the missing aspects of the framework. This would be akin to the research Dr. Adrian Owen has done; however, I don’t know if it is appropriate to mix the two in this case, or whether they can be supplemental.
I also wanted to add that I believe Dennett and others attempting to study consciousness purely empirically fail because consciousness, or feeling, isn't material. It is something immaterial brought on by the material world, therefore trying to study it through a purely materialistic lens is bound to have inherent flaws. It was mentioned in another comment that the term "emergent properties" is a full-on weasel word, as it doesn't actually explain anything, and I would agree. It says it's in the biology, we just don't know where, which is a useless statement.
Dennett talks about heterophenomenology, which is the third-person approach to studying consciousness, and argues against Chalmers's first-person approach. As others have already mentioned above, he reduces consciousness to its material/physical processes (basically disregarding the hard problem by assuming our feelings are measurable and avoiding the question of how and why our feelings arise), and believes that empirically studying its functions and processes is the way to go.
I ended up reading more literature on heterophenomenology, and while it is interesting, I take issue with focusing the "bracketing" on the felt side only: why not bracket the objective data too? It seems overly behaviourist in assuming that the objective precedes the subjective; a proper heterophenomenology, in my opinion, would need to bracket the objective as well.
How would one account for the placebo effect, wherein feeling (seems to) cause the objective?
Dennett’s paper clarifies his heterophenomenology, which to me is not a valid path to solving the Hard Problem, merely the maximization of our easy-problem-solving capabilities. I would classify Dennett as someone who should believe in the hard problem, given what he’s written in this paper. Heterophenomenology (as others have pointed out above) does not address the core of the hard problem, namely how and why we feel what we feel. I do, however, agree with his “Turing” style of easy-problem solving. I think Dennett is rightly frustrated by the lack of options that accepting the Hard Problem leaves us with for getting at his “first-person experience”.
The Libet family of experiments sparked my interest and got me thinking about the nature of feelings (temporally, that is), and I am trying to come to some semblance of a conclusion on whether it matters to the Hard Problem at all. Let’s say the feelings come after the physiological RP (readiness potential), as the data seemingly suggest; that is just even more of a head-scratcher, passing over the fact that this of course throws free will into question, but that’s neither here nor there. Why on earth would our feelings be ‘epiphenomenal’? That doesn’t really have any logical basis with regard to some Darwinian explanation of our feeling capacities, which is likely why it is not taken seriously at all. But if the data point us that way, who are we to say it’s wrong just because it makes our lives as cognitive scientists harder?
I just have some questions following our Week 10 discussion.
1. What is the position on sleep as a direction to work on the HP? When we sleep (except REM) it feels like nothing, but the body is still working. Are we zombies in the part of sleep that feels like nothing?
Feeling occurs when brain activity is low-amplitude and high-frequency, and no feeling occurs when it is high-amplitude and low-frequency.
Sleepwalking is also an interesting case: do sleepwalkers feel while they are sleepwalking? Or are they truly in zombie mode, existing and doing what they can do without feeling?
Is studying feeling/not feeling during sleep the EP or the how of the HP?
2. Could you explain why the how of the HP is not a part of the EP? How we feel is, to my understanding, a functional question, and that would be part of doing (EP). If not, could I have an example of the how of feeling, please?
1. Yes, we are (temporarily) zombies in delta sleep, and under general anesthesia -- and permanently in (some) chronic vegetative states. But we lack most of our EP functions then too, except some vegetative ones. (Study it, but not in animals. Hurting or killing animals to putter with the HP would be as cruel and wrong as so much other animal research that is not life-saving but career-, funding-, fashion- or curiosity-driven -- or just idleness, indifference or incompetence.)
I doubt that crude frequency measures will tell us much about the capacity to feel in particular, but sleep electrophysiology is not my area.
It is not clear whether sleep-walkers are feeling anything at all during an episode. Most, but not all, do not recall it if awakened. It can't be like "walking general anesthesia" because some sensorimotor function is still ongoing. I don't know of "altered states of consciousness [altered felt-states]" research casting light on the HP rather than on altered, cyclic or pathological sensorimotor functions. So I think sleep research is on EP rather than HP. But perhaps that's just lack of information or imagination on my part...
2. Let's take the difference between detecting a sound and hearing a sound: A microphone can detect a sound, but it cannot hear anything, because hearing is a state that it feels like something to be in whereas to detect a sound need not be. The EP can explain how we detect sound, but not how (or why) we hear it. The same can be said of responding to a sound: a microphone can be wired to an effector that triggers an alarm if it detects a sound, feeling nothing. But if a normal person triggers the alarm it feels like something to do it. So both processing sensory input and responding to it with motor output can be either unfelt or felt. Explaining how and why we can detect sensory input and produce motor output is EP and explaining how and why it feels like something to do that is HP.
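(To make the microphone-plus-alarm contrast above concrete, here is a minimal sketch in Python, with a purely illustrative threshold and signal. The entire causal story of detecting and responding fits in a few lines, and nothing in it hears or feels anything; explaining how and why we do is exactly what such a sketch cannot touch.)

# Detecting and responding to sound without hearing anything (EP only).
def detect_sound(samples, threshold=0.5):
    # "Detection": True if the signal exceeds an amplitude threshold.
    return max(abs(s) for s in samples) > threshold

def trigger_alarm():
    # "Response": a motor output, fully explained by the line that calls it.
    print("ALARM: sound detected")

def monitor(samples):
    if detect_sound(samples):
        trigger_alarm()
    # The causal chain is complete, and it feels like nothing to run it.

monitor([0.1, 0.9, 0.2])   # prints: ALARM: sound detected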
The approach heterophenomenology takes in this week's reading -- challenging first-person science and emphasizing a more interdisciplinary, third-person scientific approach of listening to subjects' self-reports while also utilizing other objective information like brain activity -- made me think of how this method relates to a lot of sleep research methods.
For example, when scientists study the phenomenon of 'lucid dreaming', they take in the self-reports (sleep journals) of participants, but also use methods like EEG to gather objective evidence. Since phenomena like lucid dreaming are hard to study, objective methods are not sufficient, and therefore self-reports (first-person science) are crucial.
The optional reading "Animal pain and human pleasure" made me think about another course I took, which touched on how humans have a tendency to anthropomorphize animals and inanimate objects: we tend to attribute human characteristics and emotions to them. Most anthropomorphic behaviour is not supported by scientific evidence but rather by our intrinsic need to be able to relate to and understand others. It's not that I don't believe animals are able to feel just as we do, but I was wondering how we can show that it isn't just plain anthropomorphism. The debate on whether animals have consciousness is a lot more complicated, especially since we can't even solve the case for human consciousness.
Hi Andrae, I'm wondering what you mean when you say we can't solve the case of whether humans have consciousness? I'm not sure there is anything to solve, since we know that we feel (which is the only thing we can be certain about beyond formal mathematical proofs according to Descartes). Since consciousness can just be replaced by the word "feeling", we know whether humans (at least ourselves) have "consciousness". I also think the commentaries on the 11.a reading ("Why fish do not feel pain") did a great job of answering your question on why believing animals feel is not just an anthropomorphism, as these commentaries include empirical evidence suggesting why the "Why fish do not feel pain" reading is mistaken. I'm sure you have read those since making this comment, but just thought I'd make this connection between this question and those readings!
Hi Jessica! You’re right, we are certain about our own feelings, but we still cannot be sure about other humans because of the OMP, let alone other species. And yes, the later readings and commentaries definitely cleared up some questions I had -- thanks for pointing them out!
I share your opinion regarding the impossibility of solving the hard problem, and it makes me think of cognitive science as an incredibly frustrating field. It seems like we haven't made impactful advancements in addressing the hard problem – it feels like we’re just stating theories and trying to solve the easy problem. Even if we were to achieve that, how do we proceed to move up to the hard problem? There is a saying that goes: “A problem well stated is a problem half-solved”. We've been actively working on the "stating" aspect, particularly in this class, by eliminating weasel words, clearly distinguishing between "just-so" and irrelevant theories, and fine-tuning our question appropriately: how and why can we feel. However, where do we proceed from here? If we were to solve the "easy problems", what are the next steps or directions to explain feeling? What would constitute an empirical research program, and could it potentially become clearer through the resolution of one of these easy problems? To me, it appears unlikely, resembling a dead-end and demoralizing quest.
Stevan V & Natasha: very thoughtful reflections and synthesis.
(But Natasha, I don't think reverse-engineering cognitive capacity is anything to sneeze at, even if we dub it "Easy" and it cannot explain why we are not zombies. Week 11 is about how and why feelings matter, despite the OMP and the HP.)
In "The Fantasy of First-Person Science," Daniel Dennett tackles the idea that studying consciousness solely from a personal perspective is overly optimistic. Dennett argues that our ‘beliefs’ aren't as clear-cut as we think and writes about the limitations of introspection, suggesting a different approach to understanding consciousness. However, I am not sure I agree with Dennett’s use of the word belief. I think feelings are felt, not believed to be felt.
Each thinker covered in the sky reading presents their own unique perspective on what constitutes thought and perception of the self, specifically the human self. I was thinking: what if we were to design an AI or robot based on non-human models of consciousness -- those of animals, or even plants (now that there are certain studies suggesting plants could be conscious)? How would this challenge our current understanding of consciousness, if at all? What new insights could we gain about consciousness by studying the different ways in which varying organisms interact with the environment?
In ‘Animal pain and human pleasure’, Stevan Harnad highlights the ethical dilemma of human treatment of animals, emphasizing the urgency of reducing and abolishing animal suffering. Using the Turing Test analogy, he challenges perceptions of consciousness in both humans and animals, underscoring that animals, including mammals and birds, possess the neurological basis for consciousness. This understanding raises ethical concerns about the widespread mistreatment of animals, particularly in slaughterhouses. Harnad criticizes societal indifference and misconceptions about the necessity and humaneness of meat consumption. He advocates for a shift in legal and societal attitudes towards animal suffering, calling for laws to protect animals and urging recognition of their unnecessary agony.
This week's delve into Dennett's 'The Fantasy of First-Person Science' brings to light the contested nature of consciousness studies. Dennett's skepticism towards first-person accounts as a reliable source of scientific data is particularly intriguing. If subjective experience is as fickle as Dennett suggests, where does that leave the study of consciousness? Can we ever claim to understand another's mind, or is it forever a private and impenetrable realm? Perhaps it is in shared patterns of behaviour that the essence of understanding lies, not within the depths of subjective narration.
I am drawn to this quote: "conscious experiences themselves, not merely our verbal judgments about them, are the primary data to which a theory must answer" (Levine 1994, quoted in Dennett, p. 4).
I think this statement reflects a perspective commonly linked with phenomenology and certain branches of philosophy of mind. It holds that, when delving into the study of consciousness, primary attention should be given to conscious experiences as they are directly felt or lived. Instead of relying only on verbal descriptions or external judgments, the emphasis is on subjective, personal encounters with consciousness.
Whether this perspective is deemed "right" or "wrong" hinges on one's philosophical stance. Certain philosophers and researchers defend the importance of subjective experiences in understanding consciousness. In contrast, others may argue for a more inclusive approach, considering a broader array of factors like neural processes or behavioural observations.
This stance reflects a particular philosophical position rather than an absolute and universally agreed-upon truth. It underscores the diversity of perspectives within the intellectual discourse on consciousness.
I agree with your points, Stevan and Natasha. The way I've come to terms with the impossibility of solving the HP is that it is outside the realm of the scientific method. This is, to me, because there is no observable, measurable data to extract from other people's subjective "feeling" experience and no hypothesis to test its actual existence. I think this is where DD's heterophenomenology stems from: the acknowledgement that something needs to change (DD decides to play with the definition of feelings, which he rebrands as the "belief that there are feelings") in order for there to be some kind of satisfactory answer. As an avid reader, I think the solution to the HP lies in the Arts (to the great dismay of the scientist in me). Reading Virginia Woolf feels leagues away from reading James Baldwin, because the matter at hand is capturing a moment's feelings, and Baldwin and Woolf do this in their own personal and masterful ways. That is what the arts do: they capture what it is like to be in someone else's head. Solving the HP, then, would also tell us how it is that art moves us. I know I am going way off topic, and I know the professor will probably tell me that we are not doing philosophy, we are not doing ontology, we are doing science. I am only expressing my opinion about the seemingly dead end that is the HP.
Heterophenomenology allows the analysis of qualitative data. People’s experiences and feelings can then be included as a supplement to quantitative measures, such as brain activity. The truth value of the subject’s feelings is not the concern, because objectivity and neutrality are established by the third-person view, an accumulation of interpretations.
Dennett argues that it is sufficient to study consciousness through heterophenomenology: a third-person method that records all the verbal and behavioral outputs of a human and constructs interpretations of the individual’s beliefs and intentions from that raw data. The common criticism of heterophenomenology is that it leaves out actual felt experiences, accounting only for beliefs about experiences.
Chalmers points out that heterophenomenology would wrongly attribute feeling to a “molecule-for-molecule” copy of himself which behaves identically but lacks feeling. Dennett doesn’t seem bothered by the possibility that heterophenomenology is completely indifferent to the presence or absence of actual feeling in a subject. He claims the presence or absence of feeling in a subject is irrelevant, because it would change nothing in the overall study of cognitive science. Even if there were philosophical zombies, Dennett asks, “What experiments would you do (or do differently) that you are not already doing?”
It seems to me that heterophenomenology fails to distinguish between a subject's actually feeling and merely displaying behaviours that indicate the subject believes they feel. Therefore, I think heterophenomenology skirts the hard problem of consciousness and limits itself to the study of complex behavioural patterns, thereby failing to address the question of how and why some physical systems actually feel.
Spot on; I agree entirely with your points, as you've flagged a lot of "weasel words." Bear with me as I answer here. It is crucial to remember that our feelings span a wide range that stretches beyond these behavioural reactions. Take the instance of accidentally stepping on a nail: the acute pain experienced is a direct phenomenological experience; it isn't merely an abstract concept that we choose to believe in.
Dennett's theory, on the other hand, seems predicated solely on the stimulus-response behavioural model, where considerable emphasis is laid on the response mechanism. From this perspective, setting aside the Hard Problem creates an incomplete narrative that leaves out several essential components of our spontaneous, natural responses to stimuli.
Furthermore, the idea of reverse-engineering a T5 zombie in this context appears to be an impractical resolution. It fails to address the unpredictability and the inherent uncontrolled reactivity seen in organisms when confronted with any form of stimulation. This indicates that there is a gap in our understanding of why and how these uncontrolled responses occur, underlining the need for a more comprehensive approach to the HP.
Dennett introduces the idea of heterophenomenology, which proposes studying feelings from a third-person point of view, and criticizes Chalmers's suggestion of studying feelings from the first-person point of view. As others have said, he fails to acknowledge the importance of explaining how and why feeling organisms feel. In doing so, Dennett is essentially stating that the hard problem does not exist, and therefore what he is arguing is only relevant to the easy problem, which cognitive science is already in the business of studying. It seems that he is just avoiding the HP altogether, because denying it is easier than admitting to not being able to solve it. This avoidance, in addition to Chalmers's simply giving it another name without providing a way to solve it, is discouraging and makes me wonder if we will ever come close to solving the HP at all.
I agree with everything! Just to play devil's advocate: maybe Dennett would contend that even if there are limitations to heterophenomenology, it still provides a useful and pragmatic framework for studying consciousness within the scope of cognitive science, and that the idea of reverse-engineering zombies could be seen as a thought experiment rather than a practical resolution?
While first-person accounts can provide valuable insights into people's feelings and experiences, they are inherently subjective and may not always accurately represent objective reality. The consequence of this method is that biases and subjectivity can arise in interpreting first-person experiences; personal biases, cultural influences, and individual perspectives can significantly affect the interpretation of subjective data. This is why we don't simply rely on introspection: it runs into the OMP, since we cannot objectively measure what other people feel, so we face communication obstacles that cannot establish with certainty whether other people feel.
For me, the most interesting part was about the impracticality of reverse-engineering zombies (the T5 zombie) in Dennett's context. This approach fails to address the unpredictable reactivity observed in organisms when faced with stimulation. This unpredictability raises questions about our understanding of why and how these uncontrolled responses occur, and I find that very interesting.
In the reading, one passage that struck me was when Dennett quoted Nagel: ‘3rd-person science might provide us with brute correlations between subjective experiences and objective conditions in the brain, but could never explain those correlations’. Contrary to Dennett, I think Nagel is right, and it ties back to Harnad’s reading ‘Can neuroimaging reveal how the brain thinks?’, which makes explicit the inability of neuroimaging to explain how the brain creates ‘subjective experiences’, even if we are able to trace the activated brain regions during those feelings. Overall, instead of trying to explain feelings and how and why they are generated, the article discusses beliefs (in perceptions and feelings) and whether we can assess their accuracy, putting them at the center of heterophenomenology. Answering Kant’s question, he argues that a zombie (identical to us, with the only difference that it lacks direct evidence of feelings) just has a false belief about that direct evidence: it falsely believes in the phenomenology of its feelings, while a human rightly does.
ReplyDeleteMy post keeps getting deleted from the blogger. Here is a link to it.
https://drive.google.com/file/d/1pNxjfrnqvEw29VJgE4DGlydE-1yKV1wn/view?usp=sharing
Heterophenomenology is an interesting approach to science. The idea is to collect objective raw data and approach it without bias towards a particular outcome we may be expecting, from the perspective of a “third person”. First-person science is when we do use our subjective experience to shape our expectations, looking more to confirm or deny an idea we already had. Heterophenomenology is a methodology to combat these potential assumptions, gut intuitions, or confirmation biases that may affect our conclusions.
I am a bit confused by the zombie example: if the zombie were T5-equivalent to Chalmers, would it not have the same internal states and feelings as Chalmers? How would he know that it does not possess them? He gives the same vibes as the person I saw on Reddit asking how to cite something that came to them in a dream in APA 7th edition.
In the section on meaning, it is discussed that it feels a certain way to say something and to know what you mean when you're saying it, and that even when you know what you're saying, it may come out as jibber-jabber; alternately, they ask what it might be like to not know what you're saying but to seem like you do. This makes me think about the difference between saying something true and knowing something true (not just the difference of verbalizing it): the difference between merely guessing correctly at something true versus having the proof that it is true.
I've clocked 11 years of veganism. Essentially, I gained consciousness after watching the "if slaughterhouses had glass walls" video. I think the comparison missing in this article is that we’ve created a system by which we can gain something by consuming animals and their secretions, whether that be supplying people with jobs (regardless of the scarring experience of making a living off of killing) or getting nutrition (even if so much of animal consumption is in the form of food that isn't healthy or good for you, and even though you can get that nutrition in other ways). Usually the only time we can gain something by hurting humans at that scale is war. I think something a lot of vegans have trouble with is the comparison of humans to animals. As noted in this article, humans are animals, but the second you try to say "what if we put humans in the conditions that we put animals in," it is immediately interpreted as comparing farming to human atrocities, as if one were equating those victims to animals.
One of Dennett's critiques of Chalmers is that Chalmers has prescribed no research program for the HP. I would say this is an invalid critique: the HP is called Hard for a reason; we don't even have an idea of how to approach it, but that doesn't mean that feeling isn't feeling... obviously. Hopefully someday we will have a research program, but the fact that we currently do not does not negate the logical point that feeling cannot be observed through "good old third-person science". Levine was right when he said that conscious experiences themselves, not just our verbal judgements about them, are the primary data to which a theory must answer. First of all, of course, verbal judgements can only tell us so much about a person's feelings. Language can be a great tool for expressing one's feelings, but we would never say that it informs us exactly how someone is feeling (= lets us feel what they are feeling). Second, "diverse data like behavioural reactions and physiological changes to scientifically explore subjective experiences" don't get us much closer. Feeling just cannot be attributed to any of these things.
Understanding consciousness requires a focus on direct experiences rather than just verbal descriptions. It questions the adequacy of language alone in capturing the complexity of conscious reality, emphasizing the importance of the experiences. It's a thought-provoking perspective that encourages a deeper exploration of the nature of consciousness.
For example, from a visual point of view, no verbal explanation alone can fully and easily convey what it's like to see colour, depth, motion, etc. Direct visual experience gives deeper insight. For instance, I may want to share my experience of a trip with my friends, but it is very hard to describe it verbally, since it is just too hard to capture with a limited stock and range of nouns.
Heterophenomenology combines third-person “empirical data” with first-person self-reports to provide a supposedly objective measure of felt states. Dennett’s explanation of heterophenomenology seems similar to reverse-engineering a TT candidate, in the sense of both the Easy and the Hard Problem. Thus, I am trying to connect this reading to the indistinguishability of a potential TT. If the Easy Problem is reverse-engineered (we can explain how and why we do the things we can do), we still can’t know if the candidate has felt states (or consciousness?), but we also can’t argue that it does not; we simply don’t know. Even if we can acquire empirical data about the correlates of our feelings, felt states are still left underdetermined: we will never be able to prove that another person (or a successful TT candidate) feels, but at the same time we can never be certain that they don’t feel either.
I don't understand Dennett's response to "Option B" of the change blindness question (I don't notice the flashing white cupboard door until there is a "swift and enormous" change in my perception; "Option B": this is not due to a change in "color qualia"). To this Dennett says (footnote of page 463): "Do you want to cling to a concept of visual consciousness according to which your conviction that your visual consciousness is detailed all the way out is not contradicted by the discovery that you cannot identify large objects in the peripheral field? You could hang tough: “Oh, all that you’ve shown is that we’re not very good at identifying objects in our peripheral vision; that doesn’t show that peripheral consciousness isn’t as detailed as it seems to be! All you’ve shown is that a mere behavioral capacity that one might mistakenly have thought to coincide with consciousness doesn’t, in fact, show us anything about consciousness!” Yes, if you are careful to define consciousness so that nothing “behavioral” can bear on it, you get to declare that consciousness transcends “behaviorism” without fear of contradiction." I don't understand any of this, especially "peripheral consciousness"... can anyone explain to me what he is trying to say?
Dennett’s argument in “The Fantasy of First-Person Science” completely discounts the hard problem by construing our feelings as mere beliefs. This butts heads with the notion that believing is itself a feeling. It is true, and I do agree, that one feels when one believes something. I found Dennett’s positioning against Chalmers novel, but I can’t say I agree with him -- though this all arises from my own feelings of intuition against his approach.
Through heterophenomenology, Dennett tries to adopt an external, third-person perspective for studying feelings. He does so in an effort to bring the study of feelings into the realm of empirical science, where they can be observed and measured. Although this is done with the right intentions, trying to quantify and measure feeling is not possible, and thus Dennett’s approach outright disregards the Hard Problem, which is to explain how and why we are able to feel. Whereas Chalmers comes to terms with the necessity of the subjective aspect of feeling, Dennett believes the Hard Problem to be a mere result of framing and views it as unnecessary for understanding feelings.
Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog, 13 June 2014.
Dennett’s heterophenomenology, which uses the subject’s explanation of their felt states as data, is interesting but is unlikely to lead to any results on the HP. One reason for this can be seen with felt states in animals, which we can observe even though they cannot be proven because of the OMP. Animals have felt states but do not have language to give a subjective explanation of their feelings. So, the way I see it, if this “third-person” data isn’t necessary for feeling (animals cannot give it but still feel), then it won’t be necessary for reverse-engineering feeling in cogsci.
Speaking of what English means to Searle, I finally realized what exactly is different about what my second language, which happens also to be English, means to me. When it comes to feelings about language, the simplest example is names. When I first saw a short string of symbols like "Mary", it could only be an English word to me, and that word just happens to be commonly used as a person's name. For someone like Searle, however, who lives in the English-speaking world, the name is immediately associated with someone he has seen, heard, known, or even loved. All of these sensorimotor interactions make him feel like he understands English, compared to me. The reason Searle feels differently than I would is, as Professor Harnad explains later in the article, that Searle’s English symbols are grounded in his sensorimotor capacity to interact with the things in the world that his symbols refer to.
Dennett suggests that, for most of us, a belief’s existence (something that it feels like something to believe, as the professor puts it) is typically explained by its confirmation as a true belief through the normal functioning of the relevant sensory, perceptual, or introspective systems. In other words, the "normal" existence of a belief hinges on two conditions: confirmation as true (despite potential false positives, as Dennett himself mentions) and the normal operation of the relevant system. Meeting both conditions poses challenges, as incorrect confirmation or abnormal functioning can undermine belief formation. Unfortunately, many people easily meet the first condition without any dialectical consideration (they can simply declare that their feeling exists, and then it is indisputable), and the second condition is often fulfilled as long as one firmly believes in one's own judgment (feelings being a kind of judgment, or a major influence on judgment). This is why I doubt Dennett's proposal: a feeling really can't be just a belief in having a feeling.
The article “Animal pain and human pleasure: ethical dilemmas outside the classroom” was a very pleasant read, since it summarized some of the insights we went over in class: the fact that, ethically, we wouldn’t kick Anais even if we had proof that she was a T3 robot developed by MIT, and that similarly we wouldn’t kick our pets if we had the chance. It brought me back to the fact that the only thing that matters is feeling, especially when we want to distinguish right from wrong, and that it’s one of the absolute certainties we have thanks to Descartes’s cogito (sentio ergo sentitur: I feel, therefore it is felt).
This quote had me thinking: “The absence of a neocortex does not appear to preclude an organism from experiencing feeling states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness.” This is a reflection I’ve already mentioned in a previous sky, but is the scientific evidence enough to determine that animals feel (including us humans)? Or should we maybe supplement it with the fact that our mirror neurons trump our will to kill animals for egoistic interests?
Also, I’d love to know whether we have evidence that we can feed everyone through agriculture alone, since we don’t technically need meat anymore. If you have insights from agricultural engineers or the like, please feel free to share them; that would be amazing.
Dennett’s Consciousness Explained is a great book; I would definitely recommend it. As a Dennett fan my judgement is biased, but I am on Team A. The biggest reason is that one’s conclusions based on one’s own experiences are but a case study relying on introspection. Case studies are interesting and add to the literature, but they shouldn’t be used for generalizing, because there simply isn’t enough data. The same goes for introspection. I think that if the two teams were racing to solve the Cartesian and Kantian questions, Team B would win, but at a cost. They would reach a conclusion faster, because introspection is faster than trying to figure out what is going on in someone else’s mind. Team A would have to find a research method that can scientifically prove that that, whatever it might be, is what’s going on in that person’s mind. Team B would be faster but less reliable, and Team A would be slower but would have the potential to come to a more reliable conclusion and pave the way for more and deeper understanding.
I watched Dennett's video and found his main ideas interesting yet very hard to connect. He first wanted to distinguish access consciousness from phenomenal consciousness: access consciousness being mental states that make us aware of something, while of phenomenal consciousness Dennett seemed unsure at the beginning. He gives us the flag after-image example, showing how we have an experience of a red line after the flag image is no longer present, but he says the red line does not actually exist anywhere. Yet, since we have this experience, we have the mistaken conception that the red line may exist somewhere. Later on he explains Hume's strange inversion, which is that we mistakenly project causality onto objects. We misinterpret inner activity as an outer cause, so in the flag example we mistake the inner experience of seeing the after-image for an outer cause, the red line being somewhere. My assumption is that he's giving all these examples to illustrate the point that we have experiences and often attach wrong causes to them, because we don't actually know how some of our experiences come about. Is this what he is saying the difference between access and phenomenal consciousness is? That in access consciousness we are aware of our conscious experience, and in phenomenal consciousness we are unaware, and so we often make incorrect judgments of causality? I would appreciate some clarification, because, as I said above, I found his talk disconnected and inconclusive.
The way I more generally understand phenomenal consciousness is that it is a kind of consciousness in which we are experiencing the world but are not consciously aware of our experience.
In Dennett's work on heterophenomenology, he combines 'hetero' (meaning 'other') and 'phenomenology' (the study of consciousness) to create a method that both respects and critically examines subjective experiences of feelings. This approach does not sidestep the 'hard problem' of consciousness — the quest to understand the origins and nature of subjective experience — but rather redefines it. Dennett suggests that subjective experiences can be rigorously studied within the empirical framework of science, treating them as data to be explained rather than as impenetrable mysteries. I think this approach has merits, but determining whether the scientific approach will be enough to solve the hard problem is an open question.
Dennett’s view does not deny the existence of feelings. He is not saying that our feeling is an illusion, but suggesting that the belief (which is just a type of feeling) that our feeling can prove anything is illusory. As others have mentioned, Dennett is hence suggesting that there is no Hard Problem at all. Dennett seems to refuse to credit our feelings simply because they are prone to illusions. I think, however, that this does not prevent us from exploring our feelings, because they still exist. Even if felt feelings might turn out to be false, that does not prevent us from investigating why they are illusory and why they still exist even though they are false, and this is what the Hard Problem is trying to solve.
In the section "David Chalmers as a Heterophenomenological Subject," Dennett mentions that he does not understand how Chalmers' zombie could lack qualia. Let's consider the colors we call red and blue. What matters in practice is that everything I call red is the same as everything others call red. However, the quale itself does not matter. As such, it is possible that while everyone else perceives the stuff we call "red" with the quale of "redness," to me the stuff we call red has the quale of "blueness." If it is theoretically possible for two humans to refer to the same object but experience different qualia, then it doesn't seem unreasonable to imagine one having qualia and the other having none at all.