Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59(236): 433-460.
I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
1. Video about Turing's work: Alan Turing: Codebreaker and AI Pioneer
2. Two-part video about his life: The Strange Life of Alan Turing: BBC Horizon Documentary and
3. The Turing Model (Le modèle Turing) (video, in French)
In the “Mathematical Objection,” Turing argues that open-ended questions are those “we know the machines must fail on.” I believe this may be because they often rely on sensory or emotional experiences which machines don't (yet) have. The example question “what do you think of Picasso?” requires forming an opinion based on an emotional experience. Using this question during the imitation game, I wonder how a child-machine would fare against a human child. A child interprets the world differently from adults, and Picasso’s art requires a certain maturity to understand. Therefore, just as with the child-machine, Picasso might not inspire any emotions in the human child. Would this give the child-machine an opportunity to fool the interrogator?
For now, let’s stick with the adult TT. That is the version with which we are best equipped to judge whether the candidate is able to say and do all the things any average adult can say and do (lifelong!), indistinguishably from any other average human. With children and nonhuman animals we are not really equipped to judge that.
And the TT is not an "imitation game," even though that's what Turing called it. It is an empirical test to see whether you have succeeded in reverse-engineering human cognitive capacity: producing the capacity to say and do anything any normal human can say and do.
The TT is not about special talents that some people have and others don't. People differ in their taste in art. And some people don't have any at all.
NOTE TO EVERYONE: Please read the other commentaries in the thread, and especially my replies, before posting yours, so that you don't just repeat the same thing.
How would we consider that machines can think, when they actually have stored interactions and will answer questions from the patterns they follow, or, as said in the reading, from their table of instructions? It is said in the argument from consciousness that machines do not have feelings; my question is that when they answer a question, they might come up with a general idea of an answer from recognizing a certain pattern, but, for example, how do they decide which words to use to answer? We’ve ruled out the existence of feelings, so could it mean that the answer is constructed arbitrarily from a pool of options? Is that where the random element for learning machines comes into play?
We have not discussed, yet, whether the TT can be passed by computations (algorithms), let alone which algorithms. The TT is just a test of whether a candidate can DO it all, indistinguishably from any of us (the "Easy Problem").
We can't know whether other humans feel, and we can't know whether rocks can't feel. That's the "Other-Minds" Problem. But we do know a rock can't do all the things humans can do. So Turing says: once you can't tell them apart, you can't say one feels and the other doesn't. Humans do, and rocks don't. (Probably, but not certainly. What do you know certainly?)
But what is a "machine"? It's a causal system, just as humans, and nonhuman animals, and cars, and even rocks are causal systems.
So with the TT we're trying to reverse engineer what KIND of machine we are; not whether we are machines.
At the end of the article, it seems like Turing gives two possible methods of helping machines learn to think. One is to train machines with more complex instructions and rules for more sophisticated computations; another is to shape machines toward humans, giving them sensory organs to mimic the way we go through our education. I'm leaning towards the second option because training machines for higher complexity won't give us a better understanding of our own brains, unless we install machinery brains and become purely computational. By contrast, understanding our own non-computable thoughts may be possible by teaching a machine to use and interpret its senses.
I couldn't quite follow you. People can learn, or be trained. Being able to do that is all part of being able to pass the TT, so it's part of what we have to reverse-engineer.
Sensorimotor capacity is certainly necessary for part of the TT too: T3 (what's that?). But the goal is to produce and explain the capacity, and then test with the TT whether it's really there, not to "imitate" it.
I am wondering, since we can increase the capacities of computers by expanding the instruction table: if a child is raised isolated in an enclosure without human interactions other than a screen displaying rules (supposing the child also learnt to read this way), will we be able to observe the increase in capacities as expected? Or will the child act completely differently from expectations?
This is a question for experimental child psychology.
The Argument from Consciousness raises vital questions about subjective experience and our ethical duties to AI entities. While the test primarily considers performance, it avoids the "hard problem" of consciousness, highlighting the need to delve deeper into AI's potential to be aware and experience things. Additionally, it forces us to reevaluate our ethical framework regarding AI, challenging us to extend moral considerations to machines that are indistinguishable from humans. The reading has caused me to question how our moral responsibilities should evolve in response to the increasing human-like capabilities of AI.
[Before we get carried away with hypothetical "AI entities" and our "ethical duties" toward them, some day, if and when they can pass the TT, a moment's thought might be spared for the countless non-AI entities (living, feeling, bleeding animals) that are already here now, and being treated so cruelly by us that it is almost obscene to mention "morality" in the same breath.]
The TT does not avoid the Hard Problem (or the Other-Minds Problem): It has no choice but to ignore them, because Turing's method (which is really just empiricism: to observe the observable, and try to explain it causally, and then test the explanation) cannot observe feeling, only its behavioural (T2 & T3) and physiological (T4) correlates. (Notice the weasel-words I've left out: What are they? There were four, plus a new one, "complexity.")
I now comprehend that this was an inaccurate characterization. The Turing Test is only capable of evaluating an entity's ability to do things. While the TT can address the "easy problem" of understanding how and why organisms can do what they do when thinking, the "hard problem" of why and how organisms feel remains a separate and complex challenge. The distinction between doing and feeling is essential, and the TT primarily addresses the former, emphasizing that the hard problem of feelings cannot be resolved through the TT alone.
As for the weasel words, I'm not sure if you're referring to the ones I used in my comment. Looking back at this now, I can clearly see some obvious ones like "experience", "aware" and "consciousness". But I'm not sure what you're referring to with the word "complexity".
Consciousness is a weasel word for feeling, because you can't be "conscious" of something without feeling it and you can't have an "unconscious" feeling. For the same reasons, you also can't have an unfelt feeling. Consciousness is just one of many weasel words that are vague and misleading, in the sense that they give the impression of referring to something different from what it really is: in this case, feeling.
Turing gives a very practical definition of machine “thought”: the ability to convincingly mimic the intellectual abilities of a human. He deals with many objections to this definition, and to his proposal that machines will soon be able to meet it. His response to a few of these objections shows that a machine that can think in the Turing sense would not be a solution to the easy problem of consciousness. Firstly, although he acknowledges that the human brain is an analog system rather than a digital one, he believes that the variability of analog systems can be mimicked by introducing an element of randomness to the calculations of a digital system. But if we think that cognition is not just computation, which Turing seems to believe as evidenced by his description of the brain as a continuous machine, then understanding how a “thinking” digital computer works would still leave gaps in our knowledge of how the human brain produces thought. Additionally, he’s very comfortable with the idea of a “thinking” machine whose inner workings we do not understand. Although such a machine might mimic human cognition well enough to win at the imitation game, we still wouldn’t understand how it did so, and the easy problem would remain unsolved. Of course, since Turing isn’t a cognitive scientist there’s no reason why he should be working to solve the easy problem of consciousness. I’m just pointing out the limitations of the Turing test as a judge for an adequate model of human cognition.
The goal of cogsci is not to mimic thinking capacity, but to generate it, and thereby to explain how it can be done. "Stevan Says" Turing was not a computationalist (what's that?), and the examples you cite agree with that. But what about the Strong Church/Turing Thesis? And what does that imply about trying to pass the TT (both T2 and T3)? (Think of the example of trying to design a better rocket with the help of computer-modelling).
A computationalist is someone who believes that everything the brain does is just computation. The Strong Church/Turing thesis states that everything in the universe can be simulated, where simulation is defined as a symbol system whose properties and symbols can be interpreted as real properties of a real thing. For example, NASA models their rockets on very advanced software which allows them to estimate how it will perform and fine tune its features without actually having to build all those different rocket versions. Similarly, a model of the brain may be able to be simulated and then refined in order to produce an effective Turing machine. This kind of reverse-engineering had already begun to emerge with the creation of simulated neural nets, which are computer models which attempt to simulate neural networks of the brain and produce learning akin to that of a human being.
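To make "simulated neural nets" slightly more concrete, here is a minimal sketch in Python (my own toy illustration, not from Turing or this thread; the AND-gate task, learning rate, and epoch count are arbitrary choices): a single simulated neuron trained by the classic perceptron rule.

```python
import random

# Minimal sketch of a "simulated neural net": one simulated neuron
# with two inputs, trained by the perceptron rule to compute logical AND.
# (Toy illustration only; real neural-net models are far richer.)

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def output(x):
    # Threshold unit: fire (1) if the weighted sum exceeds zero.
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for epoch in range(20):                    # repeated exposure = "training"
    for x, target in data:
        error = target - output(x)         # supervised corrective feedback
        weights[0] += 0.1 * error * x[0]   # nudge connection strengths
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([output(x) for x, _ in data])        # expect [0, 0, 0, 1]
```

The point is just that a purely digital, symbol-manipulating process can be systematically interpreted as "learning," in the same sense that NASA's simulated rocket can be interpreted as "flying."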
I found that the paragraph titled “Arguments from Various Disabilities” does not fully refute the arguments in question. Turing explains that machines can make either “errors of functioning” or “errors of conclusion” and that in the imitation game the machine would make errors of conclusion in order to seem more human (that is, capable of making mistakes). I understand that a machine could make any number of mistakes to confuse the interrogator, but I think that the distinction between human and machine lies more in which mistakes are made. Mistakes that humans make come from a multitude of past experiences, judgements, feelings, and just quick oversights. For a machine to make a mistake that fits the situation is much more challenging than just a simple computation error or “error of functioning”.
The TT is not a game. The candidate is not trying to confuse anyone. The TT is a test of whether the candidate can do everything an average person can do.
Mistakes are irrelevant, unless they are mistakes a human would never make. What matters to the TT is having the capacity, and indistinguishably from an ordinary person.
I agree, Megan. Programming the machine to deliberately make the mistakes a human would make does not feel like a genuine refutation of the argument. Mistakes may be irrelevant to the TT, since all that matters is capacity indistinguishable from an ordinary person's, but they are most definitely relevant to the question of thinking. How can we say that a TT candidate can generate the capacity to think when, through its computations, it is unable to even generate a genuine mistake? Mistakes are a hallmark of thinking as we know it. A candidate with the capacity for mistakes built in could be said to pass the TT but could not be said to be generating the capacity to think.
(1) Passing the TT does not necessarily mean passing it by computation alone.
(2) There can be approximate algorithms and probabilistic algorithms.
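As a toy illustration of (2) (my example, not the professor's): a probabilistic algorithm can estimate a quantity to any desired accuracy, on average, without ever computing it exactly. Here, a Monte Carlo estimate of pi in Python:

```python
import random

def estimate_pi(n_samples: int) -> float:
    """Probabilistic algorithm: estimate pi by random sampling.

    Points thrown uniformly into the unit square land inside the
    quarter circle of radius 1 with probability pi/4.
    """
    inside = sum(
        1 for _ in range(n_samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi(1_000_000))  # roughly 3.14, varying from run to run
```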
I’m not sure I understand the jump that Turing is making between the question “Can machines think?” and relating it to "Are there imaginable digital computers which would do well in the imitation game?". It is unclear to me how answering this second “improved” question would provide us an understanding of the issue more relevant than continuing with the previous version of the question. It relates, I think, to the objection that Turing was laying out in “Lady Lovelace’s Objection” when he affirms that just because the information is not available at the time to support the claim that a computer has a certain property, doesn’t mean it doesn’t have said property. In the same sense, I’m not sure how observing whether or not the machine is capable of playing the game would truly prove or disprove its ability to think.
See the reply above. The TT is testing whether cogsci has succeeded in reverse-engineering our cognitive capacity.
Originally we were trying to answer the question “can machines think?” but this is too ambiguous because how can we prove it? And what is the definition of ‘think’? So the question was adapted: “if a machine replaces A in the TT, will the interrogator decide wrongly as often (about the sexes of A and B)?” In other words, can a machine perform indistinguishably from a human in the TT? The article clarifies the question further: “Are there imaginable digital computers which would do well in the TT?” Machines execute limited functions, but today’s digital computers are universal machines that in theory could pass the TT if only we could teach them how.
Now integrate that with what was learned in class about (1) reverse-engineering, (2) the Turing Test, and (3) the criterion of indistinguishability.
To create a machine with the performance capacity of a human, we would have to reverse engineer how we do what we do. This is assuming that the machine must do things in the same way as humans, in other words, that the criterion of indistinguishability is to function identically. Alternatively, the TT is feasibly passed by a machine that is indistinguishable from humans in performance abilities but not in function/methods.
What is the difference between "weak equivalence" and "strong equivalence"? (Turing/w.e., Pylyshyn/s.e.). And was Turing a computationalist? (What's that?) Can there be weak and strong equivalence for a non-computationalist?
(Anyone can answer, not just Nicole.)
Strong and weak equivalence describe different relations between computations. Weak equivalence holds when two algorithms take the same input and produce the same output but do not perform the computation in the same way. This contrasts with strong equivalence, where two algorithms take the same input, produce the same output, and also carry out the computation in the same way as each other. Computationalism is the thesis that cognition is composed entirely of computation. Turing was not a computationalist in this sense. Equivalence is a concept unique to computation; however, there are theories besides computationalism that still hold that computation is part of cognition. So there are still applications for strong and weak equivalence outside of computationalism, as long as they are used in reference to computations.
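A minimal sketch of that distinction (my own toy example in Python, not from the thread): the two functions below are weakly equivalent, computing the same input/output mapping, but not strongly equivalent, since they reach the result through different internal steps.

```python
def factorial_iterative(n: int) -> int:
    """Compute n! by looping and accumulating a running product."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n: int) -> int:
    """Compute n! by recursive self-calls (a different procedure)."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# Weak equivalence (Turing): identical input/output behaviour.
assert all(factorial_iterative(n) == factorial_recursive(n) for n in range(10))
# Strong equivalence (Pylyshyn) would also require the same algorithm,
# i.e., the same intermediate states; a loop and a call stack differ there.
```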
Towards the end of the reading, it is suggested that instead of attempting to create a programme that simulates the adult mind, why not simulate a child’s mind, as this would eventually lead to the adult brain through proper learning and education. The problem thus divides into two parts, the child programme and the education process, both highly interconnected. Here, I found the connection between this idea and evolution very fascinating, since it was mentioned that it could be much faster than evolution (e.g., survival of the fittest can be sped up by identifying advantages). However, I would argue that this process can also be slower than evolution, when it comes to creating a “child programme” and its “education process”. Sure, once we have created it, perfecting it should take relatively less time, but the complexity of the programme itself should require extensive trials and thought, especially since we currently still do not have an answer to the hard problem.
Learning during a lifetime produces one learner; evolution produces a whole gene pool. But language (which is partly learned and partly a genetic capacity) provides a third way to gain and transmit information (yes, information!)
The human capacity to learn is itself part of the Turing Test, whether for a child or an adult. (The child TT is a problem for the same reason a chimp or a cat TT would be a problem: We are not nearly as good at perceiving whether the candidate has all the ordinary capacities of a child or a chimp or a cat as we are at perceiving it with an adult.)
Like Melika says above, I also found the idea of simulating a child’s mind and putting it through our education process fascinating. However, I have to say that I disagree on the point that the process could be slower than evolution. As Turing mentions near the end of the reading, teachers are often ignorant of exactly what occurs inside the pupil. As is the case with children, some are able to do far more with the information provided to them than others, whereas others are unable to take that information and do even the most basic manipulations. I would argue that this very variability makes the process that much more difficult to simulate. An example that came to mind with this unintended variability is ChatGPT: it swallowed a huge amount of information that no human could possibly be capable of swallowing, and yet it produced a completely unintended chatbot far more capable than its creators had intended it to be, just like children in some respects.
When taken theoretically, it seems Gödel's theorem can also be used to argue from the other side, against the mathematical objection: there may be certain things that a human does that are inconsistent, or they may not know the answer at all, making them just as fallible as the machine. A counter-argument would be that in this situation we are now describing humans as logical systems. Of course we know humans are not purely logical systems and can be influenced by a variety of factors like emotions or fatigue, causing one to forget things. But could these types of “errors” or influences on a response be similar to the “errors of functioning” that a machine makes? And if for philosophical purposes we assume machines do not make such errors and call them abstract, can we not also, for the purpose of this specific example, assume humans will not make such errors either (not assume that they don't experience these human factors of consciousness, but simply that these experiences can be distinguished from a human’s cognitive capacity when placed in comparison to a machine)?
Yes. There are plenty of people who do, and plenty of people who don't understand the proof of Gödel's theorem (or couldn't care less). None of them would fail the TT for that! Nor if they act or reason illogically sometimes. (And of course seeing that the unprovable Gödel sentence is true is not the same as being able to PROVE that it's true within the axiomatic system that proved that it could not be proved in.) These are all red herrings as objections to the TT. Cogsci doesn't need to worry about them.
Something that I found really interesting about the Turing test is the common misconception about its purpose. If we take Shannon and McCarthy, for example, they interpreted the Turing test as an operationalization of what thinking might be, whereas Turing saw in the imitation game the ability to show that intelligence can be achieved on a substrate other than a brain.
But we are still limited by what we conceive as being part of thinking rather than just simple computation. If we can’t reach a consensus on what thinking is based on a test that uses language communication, should we try to address this issue through another aspect, like logical reasoning or behavioral interpretation?
The only thing you need to understand about the TT is: "I've known the candidate a long time, and I've never seen anything the candidate could or couldn't do that would have made me think that the candidate could not think or feel, just like any of the rest of us. (That does not mean we always thought or felt the same thing!)"
Don't you think that's plenty enough to reverse-engineer? 'Cause that's all we have with one another.
The counter-argument that seemed most convincing to me was in the refutation of The Argument from Consciousness (4). As I understood it, Turing says we have no superior way of confirming that another person (that is not ourselves) thinks/feels than we do for determining whether a machine thinks. Therefore, if a theoretical machine can pass the Turing test, meaning it accurately imitates a human, we have no ground for saying humans can think and a programmed machine cannot, until we are able to describe how thoughts are generated in our brains.
If our reverse-engineered candidate could say and do any of the kinds of things I can (for a lifetime), what more could we ask (that we could actually test and observe)? T4?
DeleteNo "imitation": Equivalent and indistinguishable cognitive capacity, just like the rest of us.
Initially, I found it puzzling why the question of whether a machine can think is followed by the imitation game, which essentially answers a different question: how to make a machine think? The imitation game, as Turing discusses in this article, could be a reverse-engineering approach to program a machine to think like a human, rather than simply mimicking human thought processes. However, Turing raises a valid concern: how can a human distinguish whether the machine is genuinely thinking or merely learning it parrot-fashion? I believe it is essential to connect the act of thinking with the output of that thinking process to identify any relationship that might imply a machine is genuinely thinking, rather than merely utilizing its computational capacity to produce various responses.
What do you mean by a machine? Sentient organisms are living, feeling "machines": their functioning is governed by cause and effect, just like everything else in the universe. (Please, no quantum-mechanical koans!)
As to whether a TT candidate is just parroting, why not ask ChatGPT (several times!)? And come back and tell us.
In "Computing Machinery and Intelligence", Turing argues that the question "can machines think?" should instead be framed around the imitation game, in which a machine is indiscernible from a human being in a series of interrogations. He then presents his responses to a number of challenges to his views.
I found Turing's point in the "Mathematical Objection" section to be particularly interesting, in which he says that "We too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines." This statement highlights the fallible, often random nature of cognition. This is built upon in the "Learning Machines" section concerning the importance of randomness in a learning machine, with the claim that "intelligent behaviour presumably consists in a departure from the completely disciplined behaviour involved in computation", and that "Processes that are learnt do not produce a hundred per cent certainty of result; if they did they could not be unlearnt". Intuitively, I am inclined to believe that humans are more malleable, fallible and random than machines, despite Turing stating that machines, too, can make errors of conclusion. It makes me wonder if it is this aspect of randomness and fallibility that crucially contributes to us being cognizers, and how different randomness is in machines compared to humans.
To pass the TT the candidate has to be able to do anything (cognitive) that a normal human can do, indistinguishably from a human. That includes making mistakes, learning, changing views, and doing things probabilistically. (What is a "machine"?)
Part 7, "Learning Machines," seems to me to describe some concepts related to machine learning. What a learning machine aims to do is to mimic the intelligence of human beings, so Turing proposes to start by mimicking the mechanism of the human brain. It is a strong statement, as the prerequisite of thinking is that there is some information stored in our brain, which is collected throughout our lifetime rather than being inherent.
Also, in the 2nd paragraph of Part 7, I don't quite understand the metaphor of the atomic pile; while it does not seem essential to comprehending the main idea of this article (maybe?), I still hope someone can explain it to me. Thank you so much in advance.
Yes, Turing anticipated learning algorithms. And learning capacity is a (big) part of the TT.
DeleteThe "atomic pile" point seems to refer to chain reactions; not really relevant except perhaps to T4 (thresholds; feedback loops).
As has been mentioned here, I think it is important for us not to gauge whether machines can think, but whether they can generate thinking capacity that is indistinguishable from humans'. In accomplishing this goal and succeeding in the imitation game, the digital computer is essentially no different from a human (barring the sensorimotor experience explored last week). Although I understand the need to avoid the philosophical implications of what it means "to think," I wonder if we can dance around the definition forever if we are to gauge whether a machine can be truly indistinguishable from humans beyond this generative ability. I also wonder if going beyond this ability is necessary.
I wanted to make a quick note about Turing’s refutation of the consciousness argument. Although I believe it’s best to just set it aside, and that understanding consciousness isn’t necessary for the point he is making, I also believe that his reply, that we can’t be sure of other minds, is dismissive. Searle makes a similar argument to Jefferson’s (as I understand it), but to dismiss that argument with skepticism dilutes the conversation.
All we know about "thinking" (cognition) is what it FEELS LIKE to do it (Descartes' "Cogito"), and that's not much.
Cogsci's reverse-engineering is done to discover and test causal explanations of HOW (and WHY) we do it.
Acknowledging the "Other-Minds Problem" (that the only feeling we can observe is our own) is not dismissive: It's true! (Think about it.)
Nor is it dismissive to note that the only way we can explain HOW we think is by reverse-engineering how we can DO what we can DO. And that's what the T-Test tests.
Reminder to everyone: The T-Test is not "imitation" or "mimicry." It is a test of whether Cogsci has successfully reverse-engineered thinking (cognition).
In Turing’s “Computing Machinery and Intelligence” he rephrases and operationalizes the question “can machines think?” into “can a machine behave as a human (or thinker) does?” Turing goes on to posit that we can answer this new question through a machine's performance on the Turing Test. Throughout the paper he goes on to refute various objections to the argument that machines could perform like a human. He concludes by providing a description of learning machines—machines that can be made to simulate a mind and follow an educational programming which resembles the education we provide children.
The section I struggled with was Turing’s response to “The Argument from Extrasensory Perception.” I found that this section seemed disjointed from the rest of the paper, and I cannot seem to grasp what Turing was trying to clarify with his response to this argument—could anyone clear this up for me? Thank you!
I am also unclear on this section. Perhaps he was bringing up some phenomena (namely telepathy, clairvoyance, precognition and psychokinesis) that complicate anything related to communication, certainty and ‘thought’. And perhaps his solution is to create an environment for the TT that keeps this extrasensory-perception stuff out?
I don't know if it is much help to you both, but I found this on the Web: "In 1950, extra-sensory perception was an active area of research and Turing chooses to give [extra-sensory perception] the benefit of the doubt, arguing that conditions could be created in which mind-reading would not affect the test. Turing admitted to "overwhelming statistical evidence" for telepathy, likely referring to early 1940s experiments by Samuel Soal, a member of the Society for Psychical Research."
This isn't an in-depth search; it's from Wikipedia. The source it comes from is a book chapter. The reference is below:
Leavitt, David, 'Turing and the paranormal', The Turing Guide (Oxford, 2017; online edn, Oxford Academic, 12 Nov. 2020), https://doi.org/10.1093/oso/9780198747826.003.0042, accessed 11 Sept. 2023.
Shona, good summary.
Ignore what Turing says about ESP; it's bogus. (Newton also had a koo-koo side, about "alchemy"). Science (and even Cogsci -- except when it goes Cogsci-Fi, as in the "Matrix") is about the real world, not the supernatural.
Even in physics (e.g., the many-body problem and statistical physics) you can't predict all outcomes from their initial states. But predictability is not the same as causation. It could all be perfectly deterministic, just not predictable. Or it could be indeterminate -- as in quantum mechanics (but let's not get into that: that's physics's "hard problem" and Cogsci has its hands full with its own.)
["Stevan Says" that in Cogsci the problem of "free will" is really just the problem of the FEELING of free will (and that's just an instance of the "hard problem", just as any other felt state -- green, blue, loud soft, happy, sad -- is.) The rest is all physics's problem of predictability, causation, and uncertainty, not Cogsci's.]
The most striking concept to me from this reading is those related to Lady Lovelace's Objection. It really does seem that given the initial state of the machine and the input signals it is always possible to predict all future states. Based on what we know of the mechanism of the Turing machine, there is no room for original or creative thinking. But then, what is an original thought? Perhaps there is a natural course of the universe that also applies to human cognition, implying we have no free will. I’m sure there exist distinctions and more than I know on defining creativity, but it is a fun thought.
Lady Lovelace's Objection stood out to me as well, particularly how he refutes it. No, a computer may not be able to truly generate "original thought", but with the appropriate instruction table it can generate something that mimics it. Your question of "what is an original thought" is exactly what he gets at when he says that we are similarly fed information and generate ideas based on what we know. If we can reverse-engineer the way we have "original thought" from information we know and put that into instructions for a computer, that output is then indistinguishable from human original thought, because we do not have access to how each other generates thought in the first place; we only see the product.
See reply about "free will" above. And before we worry about creativity, let's reverse-engineer ordinary learning, reasoning, and language! (Here's the Open Access Version of the creativity paper; the first one was just so Csengele could see where it had appeared originally.)
In this paper, Turing elaborates on the idea of an ‘imitation game’ which is not a game, but a test to see if a machine can produce responses indistinguishable from those of a real human being, according to a judge. This is in response to the question “Can machines think?”; lacking an operational definition of ‘machine’ and ‘think’, he instead reinterprets the question in observable terms: would a human being believe that the responses are from another human being?
It is very interesting to observe the ideas concerning computers and computer programming prior to the development of the technology we possess today, not only in computer operating systems but also in machine learning. In section 6, it appears that Turing himself anticipated the advancements when he says “I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning”. Even since the early 2000s, the capabilities of computers have greatly increased, as has the storage to perform these operations. With theoretically unlimited storage capacity, then, what could we not teach a computer, such that it evolves to become increasingly human?
The TT is not a trick or a game, nor is it imitation; nor is it just Q & A. It is the test of whether cogsci has succeeded in reverse-engineering human cognitive capacity (which is not just verbal Q & A).
Not only did Turing predict cogsci and reverse-engineering; he was one of the co-inventors of the computer. But his 50-year prediction was only about how much progress he expected toward the TT; indistinguishability to 70% of judges for five minutes certainly is not the TT. (Why?)
'Indistinguishability to 70% of judges for five minutes certainly is not the TT' because, from my understanding, the point is to fool 100% of the judges and to do it for an indefinite amount of time (or lifelong).
Yes, but please read the other commentaries and responses. Surely lifelong indistinguishability is not "fooling"! That's all we've got with one another, isn't it?
Debatable in my opinion! I think it is possible for us to be 'fooled' for a lifetime, but we will not be able to know if it is the case (due to the other-minds problem). So yes, given that we have no way of *knowing* whether the entity that passes the TT for a lifetime is the same as us human beings, we will have to accept that this entity is essentially the same as humans.
Or we can agree that reverse-engineering and T-testing can solve the Easy Problem but not the Hard Problem.
Gödel’s theorem was of interest to me because it mentions how each ‘thinking’ thing has its limitations. We often answer questions wrong ourselves, so if a machine did the same, why would we be so quick to judge it as unintelligent or unthinking? It does not ‘think’ in the same way we do. There are similar limitations to human intellect; we just don’t question it the same way, because we can describe what thinking feels like and we all share that experience.
Well, mathematicians (notably David Hilbert) were certainly surprised (and disappointed) by Goedel's proof; and Frege had an even more despondent reaction to Russell's paradox. But limitations on what mathematics can prove are not really limits on cognition, because although the unprovable "Goedel sentence" "The sentence with Goedel number G is unprovable" is indeed unprovable (because that sentence is the sentence with Goedel number G), we nevertheless know the sentence is true. Russell's question about whether the set of all sets that do not contain themselves does or does not contain itself produces a paradox that is false if it's true and true if it's false. But again, that is a limit on certain formalisms, not a limit of human cognition.
So these limitations are not relevant to reverse-engineering cognition. Yet it may still be true that there are limits to cognition, and not just trivial ones like limits on human memory.
But "Stevan Says" these are all (innocuous) red herrings in Goedel's paper. (And so is Lucas's argument against computationalism, which we might discuss in class Friday. The argument is based on Goedel's proof.)
Where my mind goes most after reading this is to the subjectivity of everything and of course how empiricism can only go so far (all of which Turing has of course taken into account). But my mind cannot seem to rid itself of thinking over and over about how a TT-passing entity has to make us think about all other ‘life’ (scare quotes because of the question of life being necessarily biological, yada yada) and the question of inner worlds: the other-minds problem. I very much understand the course of action here, to look for intelligence rather than the slippery and elusive fish that is “consciousness”; however, my interest chases that fish far down the river, where the classic qualia-color example makes an appearance: your red may be an inherently indescribably different red from the red that I see (both in my head and out in the world), and there is fundamentally no way to know this. So enter a TT-passing person(?): who’s to say they are not actually having their own experience of red? Should there be a distinction drawn between human inner-world experiences and machine/non-human inner-world experiences? (No, that’s just kicking the can further down the road.) I think I just find myself where Chalmers probably once did, coming to terms with the true Hardness of the Hard Problem. Thank you, Mr T. (Turing).
Chalmers will be talking here about ChatGPT and the "hard problem" on Thursday September 26, 10:30am, on zoom at https://uqam.zoom.us/j/83002459798.
But this is a course in cognitive science, not in philosophy. So questions like "Does it feel the same way to you or ChatGPT to see red as it does to me?" are not the other-minds problem that we are considering here. Cogsci and the TT would be concerned with whether ChatGPT is feeling anything at all --- if the TT (whether T2 or T3) could test for feeling at all.
But Turing says it can't. Why not?
And "qualia" is just one of the many weasel-words for feeling (anything at all).
The TT is concerned with whether cogsci has succeeded in reverse-engineering human cognitive capacity, not with whether the machine has feelings or experiences qualia. Turing's test is designed to evaluate the machine's performance in terms of its responses, not its internal states or subjective experiences. Thus, the TT does not directly address the question of whether a machine can feel or experience qualia, as it is focused on the machine's ability to exhibit human-like behavior.
Correct.
Turing proposes that instead of answering the question "Can machines think?" we examine whether they can perform indistinguishably from a human in the Turing Test. This is because we have no adequate definition of "think," nor would we be able to tell if a machine is thinking, due to the other-minds problem. As cognitive scientists we aim to answer the Easy Problem: how and why can humans do what they do? By designing a machine that meets the indistinguishability criterion of the TT, and assuming we have a perfect grasp of the inner workings of such a machine, we would hope that the easy problem is solved and that we have succeeded in reverse-engineering cognition. However, even having accomplished this monumental task, I believe we would not have solved the Easy Problem. At best we would have created a machine that is weakly equivalent to the human mind. We would not have succeeded in answering how it is that HUMANS can do what they do; we would merely have succeeded in demonstrating one possible way SOMETHING might do what humans can do (this alone would obviously be a monumental achievement). Given this complication, I'd be interested to hear everyone's thoughts on the importance of the Turing Test as a metric for gauging success in reverse-engineering human cognition.
Hey Stephen,
I agree with you that the Turing Test has its limitations. It is true that even if we build a machine that passes the Turing Test and understand how it works, it’s not the same as fully understanding human cognition. However, I also think there is value in Turing's proposal to focus on whether machines can pass the Turing Test rather than tackling the abstract question of whether they can “think” which would be way more difficult to answer. I think it’s important to recognize that, for now at least, the Turing Test is still the best metric we have to measure a computer’s cognitive abilities, and represents a meaningful step toward addressing the "Easy Problem."
Good summary, Stephen. Now look up what "underdetermination" means and tell me what you think.
Lili, don't be so sure that computation can do the whole job. We still have T3 (sensorimotor robotic capacity) ahead of us (this week), and Searle (Week 3) and the symbol grounding problem (Week 5). If Turing-style reverse-engineering would not provide "full understanding", what (if anything) could?
I agree with you, Stephen. As I understand it, the computer does not reverse-engineer how humans perform intelligently; it is only weakly equivalent, generating indistinguishable outputs by completely different algorithms.
I asked GPT, and "underdetermination" means that "evidence is insufficient to uniquely determine a particular theory, explanation, or interpretation." Does it mean that the same evidence or result can be explained by different theories? In this case, since we can only test the candidate's performance, we don't have enough evidence to determine whether it is a valid reverse-engineering.
I’m also confused about Prof. Harnad’s idea that passing T2 requires more than computation. I’m not sure what else is required. Is T3, the sensorimotor capacity, more than computation?
As I was reading the section entitled “The Mathematical Objection”, I found myself agreeing with the argument presented that opposes Turing’s views on the Turing test. Indeed, even if we have a digital computer with infinite capacity, there will be times at which this computer might fail to answer a specific question, and in failing to answer it, it will not even resemble the way that a human might fail to answer a question. Plus, even if it is argued that humans can also make mistakes and fail to answer some questions of the test, as I just mentioned, I believe that machines cannot accurately replicate this human experience of making a mistake or failing to know the answer to something. Moreover, I don’t think it would be possible to program a computer to follow a set of rules that describe what a human should do in every conceivable set of circumstances that they might encounter, neither during the Turing test nor in ‘real life’.
(1) The TT is not just Q & A. Why not? And what is it?
(2) It's fine not to believe. But you need reasons to explain why you don't believe. Next week (Searle) we'll get a reason. (And there are others.)
2a) Computing Machinery and Intelligence
Turing holds the unique perspective that digital computers, which he defines as machines that act through discrete states, are capable of emulating humans. Examining the details of his paper and his explanation of the Turing Test, Turing doesn’t seem to be concerned with the “how” questions that Cognitive Science seeks to answer. He explores ideas that investigate the capabilities of computation, rather than seeking to understand how mental processes work; for example, he points out that man would be highly unsuccessful in imitating the machine. Thus, how could the reverse be possible? Turing quickly dismisses this, suggesting that the monumental feat lies in the computer’s ability to imitate humans. I’m curious whether more analysis has been done on this point; if computers are capable of operating more efficiently than “human computers,” can it really be said that the processes within humans and machines are identical? As well, from Turing’s explanation of the internal states involved in discrete-state machines, it is evident that there is always a possibility that human and computer internal states could be different despite the same output being generated; if this difference is largely ignored, how do we truly understand the processes involved? I’m interested in learning more about the ways in which Turing’s work has influenced the field of cognitive science.
According to computationalism, the answer to HOW is: computation.
Was Turing a computationalist?
The rest of your question is about "underdetermination" (what's that?) and about weak vs. strong equivalence. (What's that?)
And the TT is not about humans doing things better, but about reverse-engineering how to do what a normal thinker can do. (Why?)
In the paragraph titled "Arguments from Various Disabilities," Turing addresses objections to the idea that machines can possess certain human-like qualities and abilities. These objections often claim that machines can never perform certain human actions like showing kindness, having a sense of humor, making mistakes, experiencing love, or enjoying simple pleasures such as strawberries and cream. He challenges these objections by providing alternative perspectives. For example, while acknowledging the impracticality of making a machine genuinely enjoy strawberries and cream, he suggests that it might still be possible. The objection that machines cannot be the subject of their own thoughts is countered by arguing that this depends on whether machines have thought related to certain subject matter. He mentions the potential for machines to exhibit a form of self-awareness by monitoring and adjusting their own behavior and programs. He also argues that machines can indeed make errors of conclusion, depending on the tasks they are programmed for. Ultimately, he disputes these objections and asserts that technological advancements could enable machines to exhibit more complex and human-like behaviors.
Cogsci (and Turing's method of reverse-engineering and testing) is about explaining how thinkers think by explaining how they can DO what we can observe that they can do.
Doing-capacity is the "Easy" Problem of Cogsci. Being able to taste strawberries is the "Hard" Problem.
Turing is a genius on computation, but in philosophy he's only a hobbyist.
Where does he explain why the Hard Problem cannot be solved by his method (empiricism)? It's when he talks about "solipsism", which he gets wrong (what is it?).
But what he means is the Other-Minds Problem, which is why his method cannot test whether it has solved the Hard Problem. (If you don't understand that yet, you will.)
Turing makes the argument that "whether machines can think" is too ambiguous, and suggests instead asking whether a machine, in a specific set of circumstances that excludes clues he sees as irrelevant (e.g., the texture of skin), could act like a human well enough that a person wouldn't be able to tell it was not human. There are two questions to ask now: is this question an adequate substitute for the other, and could it be possible for a machine to do this? The answer to the first, it seems to me, is that if you accept that other people can think, you accept that a machine that is to you indistinguishable from others can as well: not because there isn't more to thinking, but because there is no way to know that another person does these things any more than a machine. The second can't be positively answered until such a machine is built, so we have to ask if there is reason to believe it definitely couldn't happen. My objections, such as originality and error, were well discussed in the paper. My only leftover thought is that many of the things that come up naturally for humans, such as error, would need to be added artificially, so it seems likely that even if we create a thinking machine, it would not be a perfect explanation of human cognition.
Good summary, but what do you mean by "added artificially"? Isn't reverse-engineering all artificial? It's humans modelling, building and testing "machines" (which just means human-made causal systems) that can DO everything human thinkers can do, thereby explaining how that can be done, having successfully reverse-engineered it.
Humans make mistakes when they try to do things they can't do (or don't do right this time). You don't have to build in a special "error capacity" or "error simulator" for that. Doesn't it come naturally with the territory when you try to reverse-engineer how they do what they can do right?
I do not think passing the TT is enough evidence to reverse-engineer how humans cognize. Assume a digital machine passes the TT: it is entirely indistinguishable from a human in its responses. We are actually already there – think of Sophia, the AI robot that was given citizenship in Saudi Arabia… yet I remain perplexed, because this in no way reveals to us a causal explanation of how we think. It cannot simply be the electrical activity in Sophia’s internal hardware (alternatively, the electrical activity of neurons), given the evidence from Babbage’s Analytical Engine, which demonstrates that electrical signals are not what defines thinking. If it’s not simply the computations… not the electrical signals we study in our Neurology classes… what is it?
Winning citizenship does not mean passing the TT. Nor would giving Lamda human rights just for saying that its worst fear is being unplugged mean Lamda has passed the TT. Can Sophia pass the TT?
Now that we have progressed in the course a bit, I am thinking about this question. I think the issue with assuming an AI robot passes the T3 sensorimotor test is predicated on whether the robot’s experience is grounded, and whether the robot understands the feeling of understanding. This understanding cannot just be forming associations between symbols, nor just mirroring; the language the robot uses must be grounded and categorized based on real sensorimotor experience, which would imply that the robot is sentient and truly UNDERSTANDS the FEELING of what it’s saying/experiencing. Perhaps these AI robots perform complex learning through supervised (trial-and-error learning through the environment) and unsupervised (infinite access to data through the internet) means, such that they can amass enough information to demonstrate category learning AND predication of the referent. I still don’t know how we are supposed to know if this is happening…
I had some difficulty agreeing with Turing's response to the argument from continuity in the nervous system, although I think his response is probably adequate. The argument states that due to the importance of the size of nervous impulses impinging on neurons - a continuous, non-discrete measurement - one cannot expect a discrete-state system to be able to mimic the nervous system's behavior. Turing responds by introducing the differential analyser, which is a type of machine that does not use discrete states, and which can respond with text (as is preferable for the TT). Turing says, "It would not be possible for a digital computer to predict exactly what answers the differential analyser would give to a problem, but it would be quite capable of giving the right sort of answer." He then goes on to say that "it would be very difficult for the interrogator to distinguish the differential analyser from the digital computer." Having thought about it, I believe he may be right in saying that a human would not be able to differentiate between a digital computer's responses and a differential analyser's responses, but would we not expect gaps to appear in a computer's answers at some point that could not be explainable by human error, especially if we are to consider that the TT-passing system must be able to pass for a lifetime?
Computation cannot produce continuity but it can approximate it (coming as close as you like). That's part of the Strong Church/Turing Thesis.
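A small numerical sketch of that point (my example, not from the thread): a discrete computation can approximate a continuous quantity as closely as you like by refining the step size. Here, the integral of sin(x) on [0, pi], whose exact value is 2:

```python
import math

def discrete_area_under_sine(n_steps: int) -> float:
    """Approximate the continuous integral of sin(x) over [0, pi]
    (exact value: 2) by a discrete sum of n_steps midpoint rectangles."""
    dx = math.pi / n_steps
    return sum(math.sin((i + 0.5) * dx) * dx for i in range(n_steps))

for n in (10, 100, 1000, 10000):
    # The approximation error shrinks steadily as the grid is refined.
    print(n, abs(discrete_area_under_sine(n) - 2.0))
```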
More important: Was Turing a computationalist? ("Stevan Says": no.)
After debunking a series of arguments against why “a machine can’t think,” Turing proposes a list of features that an adequate thinking machine should/would have:
1) The programme should simulate a child’s mind in the sense that it should be easily programmable and subjectable to education. This breaks down the process into two main parts: (A) creating the “child mind” and (B) the education process.
2) Communication in both directions should be possible in order for education to take place (modes of communication need not be equal; Helen Keller managed to learn)
3) A machine must understand feedback akin to “reward” and “punishment”
4) Channels of communication must be “unemotional”
5) A “random element” (this can be useful when searching for a solution to a problem; see the sketch below)
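Point 5 echoes Turing's own example in the "Learning Machines" section: finding a number between 50 and 200 that is equal to the square of the sum of its digits. Here is a minimal sketch in Python (mine, not Turing's) contrasting the random method with the systematic one:

```python
import random

def digit_sum_squared(n: int) -> int:
    """Square of the sum of n's decimal digits."""
    return sum(int(d) for d in str(n)) ** 2

def random_search() -> int:
    """Turing's 'random element': keep guessing until a guess works.
    (Assumes at least one solution exists in the range.)"""
    while True:
        n = random.randint(50, 200)
        if digit_sum_squared(n) == n:
            return n

def systematic_search() -> int:
    """The systematic alternative: sweep the range in order."""
    for n in range(50, 201):
        if digit_sum_squared(n) == n:
            return n
    raise ValueError("no solution in range")

print(random_search(), systematic_search())  # both find 81 = (8 + 1) ** 2
```

Turing's remark, roughly, is that the systematic sweep can waste time working through a large solution-free block, whereas random guessing may stumble on a solution sooner (at the cost of sometimes retrying the same value).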
Good summary, but please (always) read the other comments and my replies in the skywriting too. These have all been discussed, and new points have been made beyond Turing 1950.
To put it simply: since "thinking" refers to humans' cognitive abilities and is difficult to define otherwise, the solution to determining if a machine can think is to ascertain if it can do as we do (not only imitate) to the point where we couldn't distinguish the machine from a human. This test is not intended to assess the intelligence of a machine (incorporating human-like errors would be counter-productive) or to determine if it is sentient (the TT ignores the hard problem because it can't do otherwise). The TT is more like a theoretical thought experiment, whose point is: if a machine manages to pass the TT, then we will have succeeded in reverse-engineering human cognition. Furthermore, if this machine is only computational, it would confirm the strong hypothesis that everything can be simulated computationally. Is my understanding correct?
Bravo, spot on. (But reverse-engineering is not just a thought experiment any more, as it was in Turing's day. The latest is Large Language Modelling and ChatGPT.)
I enjoyed this reading, as it improved my understanding of digital computers as well as of human thinking. I particularly appreciated the paragraph explaining that though both the human computer (brain) and the digital computer have electrical systems, this does not equate them. Turing states that it would be better to compare them on a mathematical basis. I can agree that in terms of mathematical computations, solving a problem would have similar steps or states.
Later on, I was interested to read about the Mathematical Objection to "Can machines think?". To my understanding, the objection argues that because there are limitations to the questions (some simple ones included) that a digital computer (even one of infinite capacity) could answer correctly, machines cannot think as humans can. I find myself agreeing with Turing, as he states that humans are often unable to come up with a "correct answer" themselves, and that a factor fueling the belief that machines cannot match human intellect is a fear that one's superiority is being challenged. I don't currently hold the belief that computers can think, and maybe that comes from the same fear Turing was writing about. I am interested, however, in the rate at which technology progresses, while we as a species cannot progress (at least evolutionarily) nearly as fast. Perhaps technology can catch up, and with that in mind, I am more open to the idea that computers could get closer and closer to what we consider "thinking".
Please always read the other comments (especially my replies) first, so you don't say what has already been said or answered.
The TT is not Q & A (what is it?). And the problem of errors is a non-problem: why?
And what is the difference between the implementation-independence of computation (the software/hardware distinction) and weak vs. strong equivalence?
We usually justify hurting animals, eating meat, and "respecting" human beings more than any other species by the fact that we are conscious and sentient beings, owing notably to our ability to think and feel. We consider ourselves the better species because we are able to reflect and introspect. The heads-in-the-sand objection to the problem of whether machines can think highlights this phenomenon and applies it to a broader concept, namely that we cannot admit that machines are capable of such things. However, this argument does not really imply that machines are not able to think.
I also have a broader question: we said multiple times that the goal of cognitive science is not just to mimic our thinking abilities but to reverse-engineer them, so that we can explain the mechanisms through which they occur. But how can we try to generate it if we are not able to explain it beforehand? Is it a system of trial and error that would eventually lead us to the right explanation?
Kid-sib could not follow what point you were making about consciousness, hurting animals, and machines: what was it?
Yes, science (not just cogsci) is seeking an answer, getting a hunch, and then testing it.
One part of this reading that really caught my attention was the section discussing replicating child development into adulthood with a computer, allowing it to learn and accumulate abilities and knowledge as a human being does. Two potential challenges came to mind. The first is that, as stated in the reading, "[the] hope is that there is so little mechanism in the child brain that something like it can be easily programmed." It seems unlikely that a child's mind is elementary and akin to a blank sheet. Many signs point to the presence of inherent mechanisms early in development which are used throughout the lifespan. Chomsky, for example, argued that humans have an inherent capacity for language which is present even in young children. Additionally, we can see dynamic emotional states and understanding of complex social networks in young children, another sign that the social skills inherent to human communities and connections begin early on. The second challenge is that much of child development and learning involves the ability to apply lessons, skills, and knowledge across analogous circumstances. As a young child, you read storybooks which teach you about friendship and being a good person, which you then apply in a variety of real-life circumstances. If you learn a skill in a classroom, this skill is then used in different situations. It is unclear to me whether a computer would have this same ability to generalize knowledge and skills, or whether each response and scenario would need to be individually learned. If the latter, this would be far less efficient and attainable than human development.
Please always read the other skywritings and replies before posting so you don't ask or say the same thing that's already been said and discussed. But you can build on the discussion.
Both children and adults have to be able to learn (and to apply learning). That's an essential part of the TT.
Yes, computation can learn and apply, but nowhere near TT-scale yet. And there are reasons to expect that computation alone will not be enough. Why not?
Computation alone may not be enough to create a machine that can learn and apply knowledge the way humans do, because there are components of human cognition that are not computation. For instance, internal states that do not produce an output, while not computation, may play an important role in human learning capacity.
While reading the theological-argument part of the text, I fully agreed that the argument that computers do not have a "soul" and that God did not give them one (nor to animals…) should be disregarded quite early on. However, following this train of thought, we could have dived a bit deeper. If the question is "can machines think?" and a thought is seen as a combination of experience but also innate character, what is an innate character for a computer? If it is the way it was built, then do they all have the same innate character if they were built the same? Isn't uniqueness an important characteristic of a thought? Can two humans have the same exact thought? Is it a thought if it exists twice in the exact same form? (Would that just be a computation then?)
PS: A similar question is brought up in the "Learning machines" section, but it is not the exact same argument as mine.
CogSci is trying to reverse-engineer the capacity to think, by modelling and testing what thinkers can DO, not by trying to replicate a particular person's thoughts.
I'm interested in the theological objection to the imitation game that Turing addresses in the contrary-views section. This objection as he presents it is based on the belief that thinking is exclusive to human beings, "immortal souls," but not to any animals or machines. He aptly counters by highlighting the arbitrary nature of this kind of viewpoint. He questions whether God's omnipotence would be constrained by an inability to grant souls to animals or machines if He saw fit. Turing emphasizes that theological arguments are speculative at best and have often been refuted by scientific progress. Turing's response underscores that categorizing thinking as solely a function of the soul is subjective and akin to past theological errors, such as denying the heliocentric model. Turing's response effectively challenges the theological objection by questioning its premise and highlighting the limitations of theological arguments in understanding complex scientific questions. But the objection is interesting, as it reinforces an ontological hierarchy that underlies many theologies, and which was particularly relevant to Descartes' definition of the Cartesian Mind as something that only humans can possess.
Both the soul and theology are moot in Cogsci because reverse-engineering and T-testing can only be based on what is observable, hence what thinkers can DO (T2, T3, T4). (The soul is of course a theological weasel-word for sentience, i.e., the capacity to FEEL: the "Hard Problem".)
The section of the text that most jumped out at me (telepathy aside) was the fifth section: all the propositions given there that aren't locked behind the problem of other minds have happened since the article was written, and throughout the text there were wild propositions that came true.
Take ChatGPT: it has been made to follow much of current morality (which is easily circumvented, though one need not search far for "ChatGPT is woke" accusations), but it can nonetheless tell right from wrong within the vision of its creators. ChatGPT can also "learn from experience, use words properly, be the subject of its own thought" -- the latter if one takes it as Turing did in the aforementioned section.
Even "make some one fall in love with it" has happened, a whole industry of "girlfriend chatbots" already exists, to which real people have projected real love onto.
The fact that the paper ends with the idea that perhaps one should look into making chess computers was particularly hilarious, as most members of this class have lived after the decisive victory of machine over man… in chess.
Although Turing seemed, in this paper, more interested in making a human machine than a thinking machine (his insistence on raising a thinking machine as one would a child, and such), I wonder: if, as we did with chess, we make machines that far surpass us in the other narrow scopes mentioned, could we soon develop AI that is tailor-made to make you fall in love?
Good reflections, but you're still thinking in terms of mimicry and surrogates rather than reverse-engineering. What's the difference between T2 and T3, and is ChatGPT passing T2? And if so, is it reverse-engineering real language capacity or just a recombinatory statistical parrot that has swallowed an enormous number of human words and propositions? (Come to the Thursday 10:30 Zoom Chatbot seminars: https://uqam.zoom.us/j/83002459798)
Substitute the weasel-word "intelligence" by "the capacity to do what human thinkers can do" and you'll see what Turing's agenda has turned into in CogSci.
Towards the end of S.4 (Digital Computers), Turing describes Babbage’s Analytical Engine, an entirely mechanical machine designed—but never completed—in the early 19th century. Turing uses the example of Babbage’s analytical engine to explain that, while electricity is certainly useful for fast signaling in a digital computer, it’s not an essential property, and instead, using electricity is “only a very superficial similarity”. Given that Turing believes all digital computers are essentially equivalent, imagining all the machines in the paper as the mechanical Analytical Engine gave me a completely different view on the Learning Machines section at the end and on the Turing test in general.
When you know a machine is all mechanical parts, it seems easier to say "well, that's not cognition, it's just input-output, with no resting-state exploration or original action". But I suppose that's the point of the TT, right? You don't know whether the thing giving the answers is a mechanical construction.
I like Turing’s breakdown of arguments. Still, I have a gut reaction that a totally physical Analytical Engine that passes the TT is “thinking” only in a very weak sense of the word. Turing brings this point up at the beginning, in The Critique of the New Problem, in which he briefly poses the objection that a machine might carry out something one could describe as thinking, but which is very different from what we do. But he quickly brushes this off, saying that regardless, if a machine can be made to play the imitation game satisfactorily, “we need not be troubled by this objection”. I’m not exactly sure why he passes this objection by so quickly (and gives a whole section to ESP??). Sure, a machine that passes the TT is satisfactorily impersonating human responses, but it may not have independent internal states, or capacity for original’ action. It could replicate a human response, but so can a parrot, and a parrot still doesn’t understand what it’s saying.
Please read the other student postings and especially my replies to them. There is no game, no imitation, no fooling; reverse-engineering is empirical science. And what is a "machine"? It's a causal system, and includes biological machines, like organisms, including humans; and biological organs; and nonliving machines such as waterfalls, chemical reactions, and the solar system; and artificial (human-made) machines, like clocks, vacuum-cleaners, cars, computers, and robots.
And GPT is not just a statistical parrot (as Emily Bender -- whom I invited to the Thursday series; she cannot do it this term but perhaps might next term -- thinks). It's not a parrot because it is not just doing echolalia, repeating verbatim what others write and say. It is doing (if anything) "recombinant" echolalia.
But of course many have suggested that recombinant echolalia is all WE do too, even our creative artists and thinkers.
["Stevan Says" that may well be true, but GPT (unlike us) is doing it without either knowing, believing, thinking, understanding, or meaning anything at all. That's all being projected (by US) onto what it says (and what it has derived, as a recombinatory parrot, from its enormous 2021 "Big Gulp" of writing (by us), more than could ever be fit into a single head and lifetime. WE are GPT's homunculus!]
During the reading I got stuck on Lady Lovelace’s objection, more specifically the variant that a machine can "never do anything really new." Turing counters that no man, much less a machine, can be sure of his “original work” since it could be sparked by following “well known general principles” or grown from a seed planted from earlier teaching. This section got me thinking about the links between the Turing Test and free will. If what Turing says about the difficulty in producing original work is true, does that mean everything we do is somehow predetermined? This seems to me a very strong argument against the idea that humans have some kind of God-given natural difference that computers could never be granted. I also found this line of thinking interesting, because instead of advocating for computers having more powers than we assume, Turing points out that we may actually function on their “lower level.”
Turing's descriptions seem aligned with the T2 model, as we see in current AIs like ChatGPT. While at times ChatGPT can feel machine-like, by adjusting certain parameters, such as complexity and sentence variation, its responses can seem more human. If a future version of ChatGPT were to perfectly imitate human interactions, it would likely be due to advancements in its learning algorithms, similar to the progression we saw from LaMDA to ChatGPT. However, we still might not begin to consider it as truly "thinking." It doesn't necessarily provide insights into the workings of the human mind; we would still think of it as a better-executed program operating on familiar principles.
Given our familiarity with text-based interactions like ChatGPT, I wonder what a T3 model might look like. Does the ability of some AI models to recognize and generate images or voice recordings qualify as a T3 characteristic? Or should its visual processes work like ours, capturing and interpreting light the way our eyes do? If an AI system perfectly emulated human-like sensory and motor processes, would we be more inclined to consider it as "thinking"?
T3 is not just T2 plus some visual inputs. It has to be a sensorimotor robot, not just seeing and hearing, but moving and manipulating, indistinguishably from us. Sensorimotor capacity is too intrinsic and essential to cognitive capacity to be bracketed as vegetative function.
Loved this reading.
In Section 7 Turing says: "The survival of the fittest is a slow method for measuring advantages. The experimenter, by the exercise of intelligence, should be able to speed it up. Equally important is the fact that he is not restricted to random mutations. If he can trace a cause for some weakness he can probably think of the kind of mutation which will improve it." Would this human judgement not be fallible? And would this judgement lead to the desired emulative outcome? I mean, who's to say. This does have me concerned about the ethics underlying Large Language Models and the firms behind them… to be further investigated.
In Section 6.2 Turing argues that the matter of humans being scared of machines thinking is largely an ego-based one: we rank ourselves highest on an intellectual hierarchy and take pride in this position. Ending this section, he offers comfort to those who believe that human souls are especially unique by telling them to consider reincarnation. Funny, considering that at the end of Section 6.1 he says that past theological claims like those in Joshua and Psalms seemed correct until we gained more knowledge. Cheeky.
Lastly, the end of Section 7 reminded me (specifically the word "abstract") of both a feeling and a thought I had when I first played around with DALL-E and Midjourney. I was initially unnerved, then amazed, then unnerved again, all while thinking "huh… it actually did it." Reflecting now, I still don't feel that their outputs are necessarily 'human,' just as I don't feel that a Robotor robot can sculpt as well as Michelangelo. I'm either thin-slicing and *actually* sensing something off, or I am merely imagining the difference because of my human superiority. Nevertheless, I both anticipate and fear the day I am actually duped by AI painting and sculpture.
Section 7 sounds like genetic engineering.
The telepathy and reincarnation parts are nonsense, like Newton's alchemy. Noblesse oblige, Brobdingnagian prerogative.
Creativity being so recombinatory, Turing candidates (both T2-wannabes and T3-wannabes) will probably be selling us art, music, novels and poetry well before they have passed the TT. (But we are pretty astute detectors of the mechanical and derivative, even in human art.)
I am quite impressed and touched by the contrary opinions expressed by the author.
This question is actually something I have been thinking about, especially with the emergence of modern technology like ChatGPT, which could be a thinking machine.
There is actually an argument in the text: "Thinking is a function of the human immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or machine. Therefore, no animal or machine can think." This makes me think that, in human existence, the difference among humans, animals and machines is that humans can think. Although I don't believe in any religion, my thoughts are similar to this. Humanity has been civilized for tens of thousands of years because people can think and explore. We have enough patience to study anything, from microbiology to everything in the universe. But this process of exploration and research is very deep and long, so if there were thinking machines, maybe human civilization would develop faster, because we could directly ask the machines anything. At the same time, however, humans would also lose their dominance. We would gradually become dependent on machines and slowly be controlled by them.
About "machines" (biological, physical and synthetic), please see my replies to other comments. About human dominance over other species, it's partly from greater learning capacity and motivation, but mostly from language (Weeks 8 & 9), which is probably unique to our species. But most of human domination of other sentient species is nothing to be proud of; it's monstrous, shameful and psychopathic.
Turing replaces the abstract question of machine thinking with a practical game in which an interrogator must determine which of two contestants, a man and a woman, is which, solely through written responses. Turing's insight lies in the shift from defining machine thinking in vague, philosophical terms to framing it in terms of observable behavior. I see how he is choosing practicality over abstract definitions, paving the way for the development of artificial intelligence. I don't think it is very realistic to answer such an ambiguous question. However, is this approach, measuring a machine's ability to mimic human responses, a valid criterion for intelligence?
Hi Maria! You've highlighted a very important aspect of Turing's approach to the idea of true machine intelligence. He did shift the focus from abstract definitions to more practical, observable behavior through the introduction of the Turing Test. However, to answer your point on whether it is realistic to answer such an ambiguous question, I do believe that it is as realistic as it can be. Although the Turing Test has its limitations, it is a very valuable starting point in terms of creating a benchmark for evaluating machine intelligence. The test does not address problem-solving ability or creativity; however, I do believe mimicking human responses must at least be considered a small valid criterion in measuring a machine's intelligence and understanding. As I mentioned in my comment from last week, we humans have deep roots in mimicking those around us when it comes to acting the way we are supposed to in public, along with imitating others' knowledge to then build our own intelligence. I do agree with you, however, that this cannot be the end-all-be-all when it comes to measuring true intelligence. As AI continues to grow, we definitely need better criteria to understand what it means to truly understand.
Please read the other comments and especially my replies (especially on mimicking).
Turing is not moving from the abstract to the practical but from speculation to empiricism.
Can someone explain in what way giving a definition of 'machine' and 'thinking' yields an impotent way of answering the question 'Can machines think?'? In some sense I do understand some arguments for a more 'empirical' and 'human-oriented' test: thinking is an inherently human ability. But in another, a definition of terms, even if not applicable to all aspects of 'thinking' and 'machine', would seem more scientifically potent and thus yield more tangible results. Furthermore, the 'imitation game' approach to this question could be seen as having a strong subjective element. Ultimately the cognitive sciences are sciences, and in some sense a more theoretical way of approaching this issue would seem more likely to generate trustworthy results.
A.M. Turing's "Computing Machinery and Intelligence" was a truly delightful read. Almost every question I had would be answered by Turing in a subsequent paragraph, and he seldom tried to establish any speculation as fact, merely stating his hypotheses on the issues he was unsure about. He wrote in a language so easily digestible that anyone can understand it, providing numerous analogies and drawing parallels to real-life phenomena in almost every section. That being said, there were some points, especially in his argument from consciousness and Lady Lovelace's objection, that left me with more questions than answers.
At numerous points in this essay, Turing draws on the fact that machines will not be able to feel and have awareness of their activity the way humans do. However, he disregards this as irrelevant to his imitation game; although I agree that it does not bear on the validity of the imitation game, there is some ignorance of the danger of creating something we don't understand. In his critique of the argument from consciousness, Turing concludes, "I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper," which leaves me to wonder why we are striving to replicate what we cannot yet fully understand. Even with the child-computer analogy, where he writes of punishment and reward as forms of learning, there is no doubt that machine learning can occur through such phenomena, but it can never be entirely analogous to a human's experience of the two. Punishment cannot induce shame, guilt, and anger. Rewards cannot induce pride or glory. In some ways the machine's inability to feel can be a good thing: it is not subject to the biases and prejudices that taint humanity with an unsightly desire for violence. However, it is also the basis of the dystopian belief that "machinery will take over humanity," since replacing man-made wisdom with faster, more accurate computation could ultimately replace all human work while remaining entirely ignorant of the moral and passionate human psyche that brings people together.
Further, I think there's another component separating man from machine other than the capacity to feel, and that is the effort and labour that underlie man-made creation or calculation. One can look at recently created AI art or music, which, although it can appear impressive, does not evoke the same emotional response and impression as human art. The machine's creation can be original, and can certainly take us by surprise, but it lacks the human labour that makes us impressed by an invention.
Nonetheless, it cannot be disputed that Turing was one of the greatest minds of the past century. It is also interesting, if not scary, how accurate his predictions about machines in the future were. Overall, although I had some questions, Turing managed to answer most of them in this paper.
Good summary and reflections, but please ALWAYS read the comments that preceded yours, or at least my replies to them. Otherwise you just say the same things (and make the same mistakes) as the others. (This is also why you should do your skywriting early in the week, not at the last minute.)
The discussion of telepathy stuck out to me the most. While I agreed with most of the author's points before this, it reminded me to doubt the writer. It prompted the question of how he can use machines' inability to perform extrasensory perception as a point in the debate over whether machines can think, when there is not enough research or data to prove that humans can do it. He seems to contradict his earlier claims that humans can respond to perceptions that machines can't, yet here he entertains the idea that extrasensory phenomena could give a competitor a decisive advantage. I'm not sure whether I am misunderstanding his point regarding telepathy or whether it truly does not hold much power or evidence.
Yes, the supernatural stuff is super-silly, but not important one way or the other for Turing's substantive contributions: computation, Turing Machine, Weak/Strong Church-Turing Thesis, Turing Test -- and eventually AI and reverse engineering. (See Newton on alchemy.)
What I gathered from the reading and the skywritings above is that we are not trying to answer the question of whether machines can think, because we cannot even do that for ourselves; really, the test is meant to solve the easy problem. We use the Turing Test to reverse-engineer humans' cognitive capacities, i.e., what most humans can do. If we manage to build a model that is indistinguishable from the average human, then we have succeeded in understanding/explaining human cognition.
Exactly.
First of all, I don't know if I was the only one who found the section about ESP amusing; I can't believe that Turing took the time to refute this argument, and I appreciate that he took it as a serious objection to his claim and came up with a solution (i.e., to put the competitors into a "telepathy-proof room" would satisfy all requirements).
Moreover, although Lady Lovelace's objection resonated with me, I was aware that it could be refuted easily, as I know that there are AIs out there that learn on their own.
About Turing's idea of a child machine: I think it is bold of him to presume that we will end up with a machine that can simulate an adult brain if we produce a machine that simulates a child's brain and subject it 'to an appropriate course of education'. This assumes that computers learn as humans do, which is not the case. Perhaps the people working on neural networks will be able to do this one day, but I think it's impossible to reproduce the human brain electronically. I believe we would have better luck trying to understand how the human brain works and then applying those mechanisms in algorithms (instead of recreating the same thing digitally). I hope I make sense to you! If not, please let me know and I'd be happy to elaborate here or in person.
We don't know how people learn at TT scale, but unsupervised and supervised neural-net learning looks like a good first approximation.
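For concreteness, here is the smallest-scale example of the supervised half (a sketch of mine, nowhere near TT-scale): a perceptron whose weights are corrected by labelled feedback until it finds a simple category boundary.

```python
# A minimal supervised-learning sketch (nowhere near TT-scale):
# a perceptron corrected by labelled feedback, error by error.
import random

def perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {-1, +1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else -1
            if pred != y:              # supervised: the "teacher" corrects the error
                w1 += lr * y * x1
                w2 += lr * y * x2
                b += lr * y
    return w1, w2, b

# Labelled examples of the category "x1 + x2 > 1".
data = []
for _ in range(200):
    x1, x2 = random.random(), random.random()
    data.append(((x1, x2), 1 if x1 + x2 > 1 else -1))

w1, w2, b = perceptron(data)
print(f"learned boundary: {w1:.2f}*x1 + {w2:.2f}*x2 + {b:.2f} > 0")
```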
I thought Turing's reply to the Argument from Informality of Behaviour was the most convincing. He argues that the logic of the following counter-argument is erroneous:
We can't write out or determine all the rules which determine human behaviour.
We can do that for machines.
So men aren't machines.
I offer support to Turing's dismissal of this logic.
If I am given a machine that was made by someone else, how can I know its so-called instructions? It would take time and observation for me to determine its reactions. Better yet, it might not even be possible for me to determine ALL of its instructions, as I would have to test out everything. I believe this is true even for machines with few instructions.
Then, if we compare this to the way we learn about how humans work, we understand the complexity of the task. Whether it be through sociology, philosophy, cognitive science, or economics, all these fields are observing and predicting human behaviour in order to find out these rules.
In the previous example one might even try looking at the hardware in order to understand what the machine is doing. This would correspond to neuroscience in my analogy.
Computation is not the only candidate for the mechanism underlying cognitive capacity (and it's probably only a part of it). What are the other candidates?
Turing discusses the common arguments he has heard for why "computers cannot think" and goes about debunking their rationale using the imitation game and how a computer could succeed at it. Turing then concludes that there is more evidence for debunking these previous explanations than he has evidence that "computers can think." His limitations were based on the limits of the technology at the time of writing. His best source of evidence comes from how children learn (the initial state of the mind at birth, formal education, and experiences), and he proposes recreating a child programme and an education process to simulate the adult brain. This adult-brain programme can then be tested in the imitation game. One connection that I missed while reading is how a machine successfully winning the imitation game equates to "machines can think".
Forget the imitation game: What is reverse-engineering, and what is it to pass the TT?
Reverse engineering is breaking down each mechanism with a complete explanation of function. To pass the TT, the machine would have to be 100% indistinguishable from a human 100% of the time for the rest of its life. Is that correct?
A passage in the text reminds me of the discussions we've had over Pylyshyn's computational theory (or rather how he came to it): "In considering the functions of the mind or the brain we find certain operations which we can explain in purely mechanical terms. This we say does not correspond to the real mind: it is a sort of skin which we must strip off if we are to find the real mind. But then in what remains we find a further skin to be stripped off, and so on." When Pylyshyn talks about the non-cognitive, or the noncomputational, he is stopping himself at a removed 'layer of cognition', namely what rests under the algorithmic function of computation. The black-box problem remains: what is left after the layers of logical operation have been peeled away? Emotion, perhaps, for emotions guide our decision-making as much as, if not more than, logical induction. This is what Turing refers to in the section "Arguments from Various Disabilities": "Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream […]" According to Lisa Feldman Barrett, emotions are a way to urge us to regain homeostasis, whether on a purely physiological level (i.e., hunger means eat, to stay alive) or on a social level (feelings of friendship/affinity tighten social bonds, which ensures survival). Would it make sense to speculate that, insofar as humans have been making, improving and destroying computers, computers have simply not needed emotion as a means to ensure survival? Following this line of logic, would it make sense to speculate that, given the rise of artificial intelligence, a setting in which computers can program other computers, an analogous spectrum of emotion might arise in computers, or at least expressions of it (whether we "believe" that the computer actually experiences it or not)?
The origin and function of feeling (not just emotions, but also sensations like red and green, or hot and cold) presents a "hard problem" (Week 10) for cogsci and reverse engineering, partly because of the "other-minds" problem, which is that the only feelings you can observe are your own.
Turing's method is only applicable to the "easy problem" of DOING (the capacity to do the cognitive things we can do, especially the capacity to learn and the capacity for language) because we can observe, hence test, whether our TT candidate can do them.
AI can provide useful tools for humans to use, but reverse engineering is meant to determine what kind of "machines" WE are, and how our brains produce the capacity to do what we can do.
Learning about Turing's quest to define machine intelligence through the "imitation game" leads me to ponder how even humans may struggle to play this game accurately. Our own biases, preconceptions, and limitations in understanding human behavior can lead to misjudgments. This raises the intriguing notion that AI, with its impartial algorithms and data-driven analysis, might in fact have the potential to outperform humans in this game. However, it also highlights the challenges of creating truly human-like AI, as our understanding of human behavior is itself imperfect. The "imitation game" not only challenges the capabilities of machines but also underscores the complexities of our own human intelligence.
Turing argues that to answer the somewhat vague question "Can machines think?", one should ask whether the interrogator's identifications would be the same for two people as for a person and a machine (in the "imitation game"), where passing the TT means a machine doing any cognitive thing indistinguishably from a human. I was wondering: if the TT is reverse-engineering human cognitive capacity, is it always about observable things? The overall idea of "thinking" doesn't seem observable, yet there are some outcomes of thinking/cognition that we can observe, for example decision-making (I guess it is the output we generate from thinking?) or some neural correlates (e.g., prefrontal cortex activity in fMRI studies).
The TT is only the test of whether the reverse-engineered candidate is able to DO the kinds of things we can do (learn, reason, talk), indistinguishably from any one of us. DOINGS are all observable. This method does not work for unobservable things like feeling.
The way the candidate does what it can do is not necessarily computational, and cannot be JUST computation (Week 3).
The section that I found incredibly engaging in this article is the one related to contrary views on the main question.
The theological objection regarding the arbitrariness of the orthodox view, pointing out the "Moslem [perspective] that women have no souls," had me quite intrigued and disturbed. I found it a particularly clumsy shortcut for illustrating the subjectivity of religious orthodoxy from one tradition to another. My background as a Muslim never led me to encounter any source of the kind in any branch, and my exploration of the teachings of other major religions never turned up a denial of the existence of souls for women; on the contrary. Moreover, if T-testing and reverse-engineering can only be based on what the thinker can DO, why lead a discussion about the soul and theology in cognitive science?
Another point that had me really interested is Turing's reply to the argument from consciousness. The solipsist point of view, which asserts that the only thing one can be certain of is the existence of one's own mind or consciousness, reveals very smoothly the "Other Minds" problem. Because there is no definitive resolution to the problem, Turing is right in pointing out that "the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking".
Ada Lovelace, the person who wrote the first computer program in history, made the point that machines cannot learn or accomplish anything new beyond what they are programmed to do. Surrounded by the impressive technology we have today, in the form of LLMs such as Anthropic's Claude, ChatGPT, and Bard, which are evidently capable of learning, it is amusing to read Lovelace's bold claim that this type of technology could never be developed. In hindsight, it is easy to look back at her conviction and laugh, but we must be reminded of the limited extent of computers and programs in her era. It's no wonder she thought the idea impossible, that a machine could learn to perform tasks beyond those explicitly set out in the program instructions. On the other hand, in his refutation of Lovelace's claim, Turing contends that a learning machine can absolutely be accomplished and that it is merely a matter of suitable developments in programming and engineering. When I pause and really think about the prediction Turing is making here, I am fascinated by his foresight. To claim that machines could learn, at a time when the most basic programs and computational theory were still being produced, is an impressive leap to make, but nonetheless an accurate one. I would love to hear their thoughts and insights if they were alive to see the technology we have today. I imagine Lovelace would be pleasantly surprised to find her opinions proven wrong, whereas Turing would be proud to see his predictions come true.
The quote from Professor Jefferson resonated with me: "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain -- that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants." This statement relates very easily to the advent of ChatGPT today, and all the ethical debates coming with it. Professor Jefferson would be surprised to learn that nowadays it is possible to produce sonnets or concertos with ChatGPT, DALL-E 2 and other forms of AI that are as good as, if not better than, what some important writers and composers have produced. However, as he eventually mentions, what distinguishes AI from human beings is the ability to feel, the ability to have an actual sensation and not simply reproduce/copy the product of this sensation. And I agree with him. But no one will ever be able to know whether these robots are able to feel anything or not, because no one is in the mind of these machines -- a problem known as the other-minds problem.
I think we commonly misinterpret the purpose of the TT. When I first read of it, I thought it tested how 'superior' a machine could be in comparison to human intelligence. But simply put, the TT just tests to what extent a machine can function -- whether it reaches a point where it imitates humans at a level at which we cannot detect it. But then I was wondering: can we apply the TT to the easy problem? Can it help us understand the computational and neural mechanisms by which human cognition functions?
After finishing this course, I come back to answer my ever-so-naive question! The Turing Test helps us address the easy problem by allowing us to reverse-engineer the mechanisms that underlie how and why we do what we do, so that we can at some point replicate them, in hopes of getting closer to creating a fully functioning autonomous AI system (a T5 robot).
I think the second part of the article, where Turing compares a man acting as a machine and a machine acting as a man, is very interesting, because a machine would be faster and more accurate in the computations it makes. Does that make the machine more intelligent than a person, even though a person created the machine? I guess for that we would have to define intelligence. IQ tests define intelligence through a series of questions to be answered in a limited amount of time. Let's say we give an IQ test to a person and a machine, without knowing which result belongs to the machine: would we be able to tell? Or maybe an EQ test would show that a machine is not a person? I think that with a current AI's database, it would be able to achieve a really high score on both tests, but that would be suspicious as well.
In the Argument from Consciousness part, it is interesting to me how consciousness is related to feeling emotions. This is something that truly fascinates me. Could we possibly build a machine that can "feel"? The feelings we feel are based on stimuli and chemicals in our body. Then we act on these feelings based on our thoughts. I don't know if anyone's into playing video games, but I highly recommend "Detroit: Become Human". It is an amazing game, both visually and in terms of its story about AI robots gaining "consciousness". And I guess the next question to ask would be: if we were able to build machines that can feel, would that also prove the "Heads in the Sand" objection?
Also, reading about the disabilities of machines, especially lacking a sense of humor, made me think about how we come up with jokes, why we make jokes, and why we laugh. I'll do some more research about this and add to the reply section of this comment if I find anything.
Lastly, here is a link to a fun little video where they apply the Turing Test (sort of) to ChatGPT.
https://www.youtube.com/watch?v=bKPP20rvp3s&ab_channel=Jubilee
I'm not sure what is meant in (3), the Mathematical Objection, when he says that one computer may be able to answer a question that another computer is unfit to answer. I think he might be speaking in reference to what I know as the "no free lunch" phenomenon, by which machines learn best when trained on a limited basis of knowledge. Secondly, I liked his statement that "can machines think" is a question too meaningless to be discussed, after which he proceeded to predict that our vocabulary would evolve to make room for "machines" and "think" to exist without contradiction. I thought this was funny, especially since he ends up considering the functional aspect of this question rather than the question itself.
I also found it very interesting that he predicted that the vocabulary surrounding "thinking" would eventually progress to include machines. I even found it hard myself to leave behind my own preconceived notion of what thinking means, which did not necessarily include machines, but it's fascinating to see that his prediction was correct and that these conversations are becoming more and more prevalent.
At the end of the reading, the idea of a learning machine likened to a child was proposed. Considering that this was written over half a century ago, I was astonished by how on the nose Turing's predictions were in regard to developments in AI. In terms of a machine's teacher being ignorant of what is going on inside, this bears resemblance to the black-box phenomenon of many AI models we see today. He was even on the nose about the extent to which resultant intelligent behaviour deviates from the initial programme, as well as its certainty (and therefore accuracy/reliability).
In this reading, the idea that machines are not capable of creativity was stated. I agree with this, since a machine's knowledge is based on human labor. However, there is the saying that "life inspires art and art inspires life." Can we truly say that people produce original works? There often seems to be inspiration coming from something previously done. The Turing Test has also been explored in the entertainment industry, whether it be a human falling in love with AI in the movie Her or Jubilee's YouTube video 6 Humans vs 1 AI, where people have to guess "who" the AI is.
ReplyDeleteIn "Computing Machinery and Intelligence", Turing brought up the question "Can computers think?". Because thinking itself is very ambiguous, he steered away from this and focused on whether machines could have the capacity to behave indistinguishably from a human. He introduced the Turing test as a way to answer this question. The Turing test was not a measure of intelligence because who is to say what truly makes something more intelligent than another thing. Rather, it was a way to show that through computation, machines could become indistinguishable from human cognition. One thought that came to mind was what would happen if a machine was the interrogator? Would the machine be able to distinguish between the human and the other machine at the same rate as the human interrogator?
Sorry for the late response, prof. Here is what I thought and pondered:
In the paper "Computing Machinery and Intelligence," Alan Turing addresses the question of whether machines can think through the imitation game. Focusing only on the computer's 'intellectual' ability, Turing believes that with enough storage and the right program, the computer will one day pass this game. Turing in later parts addresses theological issues and Gödel's theorem, but I won't go too deeply into them given the limited post size.
In light of Turing's work, I would like to be indecisive and decide that the question of whether machines can truly think remains indeterminate. The 'imitation game' asks us to consider the essence of thought and what machines are capable of. Nowadays, there are plenty of AIs and machines that pass the Turing Test, showing that with enough data and rules, machines can mimic human-like responses; but this is a simulation, not 'genuine' thought. While machines excel at tasks, as we learned in previous classes, experience and the boundaries of consciousness remain elusive. Turing's insights help us appreciate the distinction between mimicking intelligence and possessing cognitive faculties. And since we humans continually change, the imitation test also needs to be rethought for current technology. I believe that only when human-like machines become so integrated into our society that they are indistinguishable will 'people' agree that they exhibit consciousness and emotions. That may not be so far off: even with the differences in technology since Turing's era, his thoughts still play an important role today, and I think they remain true.
Hopefully, I answered in an understandable way. 😅
I meant the Turing Test rather than the imitation game.
Turing posed the question, "Can machines think?" and rephrased it more appropriately as the Turing Test. I find it surprising that most objections do not attempt to address the question within Turing's specified constraints. Instead, they object to Turing's conclusion based on their own interpretation of "can machines think," rather than focusing on the Turing Test.
Both the theological objection and the "Head in the Sand" objection appear to completely disregard Turing's restrictions and dismiss the entire question due to their preconceived assumptions. The arguments from consciousness and various disabilities seem to highlight arbitrary human qualities and use their absence in machines to assert that machines cannot think.
It appears that opposing views are reluctant to utilize the imitation game as a basis for discussion, and I struggle to comprehend why. Is there a critical flaw that renders it meaningless to consider?
My question is: Would Turing have agreed with the Strong Church-Turing Thesis in light of his work on the Weak Church-Turing Thesis, which posits that a Turing machine can mimic the behaviors of other machines? Since the Strong Thesis extends to any physical system and his student Robin Gandy advocated for this idea, I propose that discussions with his student may have influenced the development of his ideas concerning Turing machines in general.
I noticed my comment never showed up, so I'm reposting here! This week's reading on Turing's "imitation game" showed me a new perspective on machine thinking: instead of figuring out whether a machine can think like a human, it challenges us to see whether it can impersonate a human. In the theological-objection section, it is stated that "Thinking is a function of a man's immortal soul. Hence no animal or machine can think." Many studies show that animals have complex mental capacities and do have conscious thought, although it is different from how humans think. This made me think: since machines are argued to be currently incapable of human thinking, can they replicate the unique thinking patterns of animals (which are stated not to have the same thinking processes as humans)? While machines may not replicate human thinking, could they imitate the specific cognitive processes seen in animals?
Based on the text and previous discussions, the argument that animals and machines cannot think is refuted and irrelevant to what CogSci is trying to do. The TT cannot be based on constructing a soul, but on what we can observe and therefore do, via reverse-engineering (I have not provided anything new here, just laying out a foundation). Since the criterion for passing the TT is for a machine to be indistinguishable from humans, a machine built to have the capacity of an animal, let's say a rat, would have to fool a rat that it is one of its own kind. We can certainly feed a machine rat algorithms to produce behaviors we observe in other rats and make it squeak like one. We could implement sensorimotor systems that are unique to an animal to try to give it the capacity to communicate with that species (such as integrating echolocation for bats) and respond to stimuli. However, since we cannot understand the thinking and feeling capacities of animals, I wonder if it would be even more difficult to replicate the cognitive capacities of animals than those of humans.
Prior to this reading, I understood the Turing Test as being used to determine whether a machine could pass for human, thus being an indicator of its 'intelligence' or its sophistication. However, after the reading and skimming through the other skywritings, it became clear that it could also be used to better understand our very own inner workings as humans. What we classify as behaviour that gives a machine 'human-like properties', ultimately convincing a human of its humanity, speaks volumes about what kind of machines we are. In my opinion, in this context the TT is better suited to answering the question of what kind of machines humans are than to determining whether a machine is capable of imitating us effectively.
In the reading, the author states that digital machines, too, can have souls, if God sees fit to give them one, just as He can give a soul to an elephant if He sees fit (in the Theological Objection part). However, I am not in agreement with the author's objection to the statement, "Thinking is a function of man's immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think." Digital machines are constructed by humans. An elephant or any other animal cannot be constructed by humans; it is born naturally without any human interference. Therefore, we can never know whether animals have souls like we do. They may have souls, but there is no method to prove it. However, in the case of machines, since we are the creators, we are aware of what they are made of and can predict the range of outputs a machine will give (e.g., its reactions to various situations), since we are the ones who put those options in. Thus, I believe that although machines can mimic human actions indistinguishably, given large enough storage capacities to hold all the possible appropriate human reactions to all situations, they cannot fully become like us or like other animals with souls and think independently. Even if God were to give a soul to a digital machine so that it could think on its own, there is no clear method that could be used to identify it, since we do not have a definite formula for identifying what a "soul" is or how to know whether one has a soul or not.
Is it possible that a machine is able to imagine what it's like to feel because it understands what to do and why humans would feel a certain way, but is not actually able to feel itself? That is, is it only able to IMAGINE what it is like to feel? In that sense it would appear as if it had emotions, but it would just be mirroring what a human brain would do.
I think the difficulty of creating a machine that can imagine is the same as that of creating one that can feel. To be able to imagine means that the machine has to create a mental image of something that is not actually present. But does a machine have a mind at all with which to do so? Even if a machine can imagine feeling and can pretend that it can feel, that doesn't change the fact that it cannot actually feel, and it doesn't make the machine any more human.
This week's reading brings me back to the question of 1b, "Can machines think?". Based on last week's reading, I think the answer is no. The ability to think does involve computation, from the most basic mechanical structure to the silicon chip, which is based on logic gates that can only complete the computational part. But cognition, along with the ability to think, involves more than computation. Turing suggests that "instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one that simulates the child's?" However, with existing neural-network technology, the machine still needs a large number of input and output tokens for training, while humans do not need a large amount of pre-training. The analogy between machine and child is therefore weak: a child requires only a small amount of stimulus to activate abilities brought about by evolution and genetics. The learning process of machines is limited, while human learning methods and genetic endowments vary and may even mutate. An individual's mental states can never be reproduced or used to create a common model, so from my perspective, machines do not, will never, and need not replicate human evolution. Instead, the functions of machines should be focused on expert tasks in certain fields rather than on producing "soul-like machines." This is from a genetic and biological perspective.
I found Turing's suggestion for developing a "learning machine" to be remarkably similar to how modern machine-learning neural networks function! Turing breaks the problem of generating a thinking machine into making a "child's brain" and then providing it with a specific education or conditioning. Infants' brains are quite literally a flexible tangle of neuronal connections that are pruned into stable and adaptive patterns for symbol manipulation as the child matures. Machine-learning neural networks mimic these processes of Hebbian learning and synaptic pruning to generate efficient means of symbol manipulation. That Turing managed to foresee this sort of technique for developing machine learning is astounding to me, and it makes me wonder which of today's predictions about the future directions of AI will turn out to be remarkably accurate 50 years from now…
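To make that parallel concrete, here is a minimal sketch in Python of the two mechanisms this comment names: a Hebbian weight update ("neurons that fire together wire together") followed by pruning of the weakest connections. All names and numbers here are my own illustrative assumptions, not anything from Turing's paper.

```python
# Minimal sketch: Hebbian strengthening followed by synaptic pruning.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 8
# Random "infant" wiring: a flexible tangle of weak connections.
weights = rng.normal(0, 0.1, size=(n_neurons, n_neurons))

def hebbian_step(weights, activity, lr=0.01):
    """Strengthen connections between co-active neurons (Hebb's rule)."""
    return weights + lr * np.outer(activity, activity)

def prune(weights, keep_fraction=0.5):
    """Synaptic pruning: zero out the weakest connections."""
    threshold = np.quantile(np.abs(weights), 1 - keep_fraction)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

for _ in range(100):                  # the "education" phase
    activity = rng.random(n_neurons)  # stand-in for sensory input
    weights = hebbian_step(weights, activity)
weights = prune(weights)              # maturation prunes unused wiring
```

The point of the sketch is only that "education" here is repeated exposure plus a fixed local update rule, which is the shape of Turing's child-machine proposal.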
It is quite interesting, from our modern perspective, to see which concerns he thought the public would want addressed, in particular the theological one, the "Heads in the Sand" one, and the "Extrasensory Perception" one. As outdated as these concerns seem to most people today, I would guess that, at the time of Turing's writing, many of the other objections were not on the minds of most of his contemporaries, and yet he added them to strengthen the argument. The lack of jargon and weasel words, and the conceptual clarity with which he communicates these ideas, is also a testament to his genius.
It is also fascinating to see the ways in which Turing's predictions failed to match reality. He predicted such minuscule storage!
Moreover, I found his description of a machine that could learn interesting: its rules change, while its "meta-rules" stay consistent. The rules of learning can stay fixed while the coefficients with which the machine weighs different actions vary according to these more fundamental rules. This suggests, in my view, that a "dynamic" learning process can be a very straightforward computational algorithm, as in modern deep learning. We may additionally imagine machines, and I am sure some exist, that instead have meta-meta-rules for learning or modifying their own meta-rules, so that the rules themselves can be discovered and refined. One ought not get lost in a tower of recursion, however; meta-meta-meta-meta-meta-meta-rules serve little purpose.
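A hypothetical sketch of that layering (my own illustration, not Turing's): the update rule is fixed, the coefficients it adjusts vary, and a meta-rule governs the update rule's own parameter.

```python
# Rules vs. meta-rules: gradient descent with a learning-rate meta-rule.
def train(weights, gradient_fn, steps=100, lr=0.1):
    for _ in range(steps):
        grad = gradient_fn(weights)  # rule: follow the gradient downhill
        weights = [w - lr * g for w, g in zip(weights, grad)]
        lr *= 0.99                   # meta-rule: decay the learning rate
    return weights

# Toy usage: minimize f(w) = w0^2 + w1^2, whose gradient is (2*w0, 2*w1).
final = train([3.0, -2.0], lambda w: [2 * w[0], 2 * w[1]])
print(final)  # both coefficients approach 0
```

One could add a further layer that adjusts the decay factor itself (a meta-meta-rule), which is roughly what learning-rate schedulers and hyperparameter search do, but, as the comment says, each additional layer earns less.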
Turing claims that "an important feature of a learning machine is that its teacher will be ignorant of what is going on inside, although he may still be able to some extent to predict his pupil's behavior." It seems to me that Turing believes we needn't concern ourselves with what is actually taking place within the machine, or with the possibility that what the machine does may be described as thinking yet be very different from human thought. At the end of the day, does it really matter what is going on inside the confines of the "black box" that is the machine? So long as one universal Turing machine is suitably programmed so that it can do anything a human can do (whether everything a human can do is suitably programmable is a different question), the answer to that question is a categorical no. If the machine can do what a human can do, it shouldn't matter whether what the machine does internally is identical to what a human does internally. Besides, the people I interact with every day can do everything I can do, but I cannot be absolutely sure that their brains are doing exactly what mine is doing, or that they are not machines themselves, for that matter. The point is, it doesn't matter how a Turing-Test-passing entity does what it does, so long as it does it.
Turing's paper provides an excellent foray into many of the different arguments against a thinking machine, and how each of these arguments may be countered. I was especially interested in the argument from continuity in the nervous system. As I understood it, this argument states that the nervous system, being non-discrete in the outputs it can produce for a given input, is not comparable to a discrete-state system such as a digital computer. I wonder (a) whether "continuous" is an appropriate synonym for "non-discrete" in describing nervous-system outputs, and (b) whether this continuity could be replicated or reflected in a discrete system that produces outputs in very small increments. For example, taking the simpler case of a motor output device meant to reproduce the arm movements of a human: if you were able to control the reaching movement of the arm to the level of micrometers, would this be a suitable analog?
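Here is a toy sketch of exactly that question: a continuous arm trajectory approximated by a controller quantized to micrometers. The trajectory and the micrometer resolution are just the comment's example, not anything from the paper.

```python
# Approximating a continuous arm trajectory with a discrete controller.
import math

MICROMETER = 1e-6  # quantization step, in meters

def quantize(position_m):
    """Snap a continuous position to the nearest micrometer."""
    return round(position_m / MICROMETER) * MICROMETER

# Continuous target position vs. its discrete approximation over time:
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    target = 0.3 * math.sin(t)  # continuous arm position (meters)
    print(t, target, quantize(target), abs(target - quantize(target)))
```

With round-to-nearest, the error never exceeds half a micrometer, which is Turing's own reply to the continuity objection: a sufficiently fine discrete-state machine is behaviorally indistinguishable from a continuous one, at least at the level the interrogator can observe.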
Regarding the "Theological Objection": often, when I tell my friends that I am minoring in CogSci with a stream in philosophy, they ask me whether I think there is such a thing as a "soul," or something along those lines. I have always given the same response: only I can know for sure that I have something we could call a "soul," because only I can feel the things that I feel when I do the things I do, and ultimately that's all my soul is: my own privileged experience. It's very interesting to put this in the context of this course, to see how many weasel words reduce to the same thing ("soul" = "sentience" = "feeling"), and how many "questions" and "problems" can be obviated by realizing that it is simply the words we have ascribed to things that create these artificial problems.
From what I understand, strong equivalence is when a machine is as close as it can be to the human mind in all respects: problem solving, linguistic understanding, creativity, and perhaps consciousness(?) and error(?), which can be tested using the Turing Test (T3, T4). In contrast, weak equivalence is when a machine merely simulates human-like qualities: it matches the outputs produced for given inputs without saying anything about the internal states of the system that produces them. Learning about strong and weak equivalence reminded me of ChatGPT and how it responds to prompts. I am definitely not a ChatGPT expert, but based on my experience with it, I believe ChatGPT is a good example of weak equivalence in AI.
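A small illustration of the distinction (my own example, not from the reading): the two functions below produce identical outputs for every input, so they are weakly equivalent, but their internal processes differ, so they are not strongly equivalent.

```python
def sort_by_swapping(xs):
    """Bubble sort: repeatedly swap adjacent out-of-order items."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def sort_by_selecting(xs):
    """Repeatedly extract the minimum: a different internal process."""
    xs, out = list(xs), []
    while xs:
        out.append(xs.pop(xs.index(min(xs))))
    return out

assert sort_by_swapping([3, 1, 2]) == sort_by_selecting([3, 1, 2])  # same I/O
```

By analogy, a system could pass an input/output test while computing in a way nothing like a human mind, which is exactly the worry the comment raises about ChatGPT.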
I think Turing would be considered a computationalist, based on his idea of the Turing Test. From what I understand, computationalism is widely accepted within AI because of its logical way of categorizing large amounts of information, but I am also interested in researching what criticisms and counterarguments have been raised against it.
I find myself particularly intrigued by the concept of the Turing Test and its implications on our understanding of machine intelligence. Turing cleverly shifts the debate from the philosophical realm of 'thinking' to a more practical ground, where machines are judged by their ability to imitate human behaviour. This approach, while groundbreaking, raises crucial questions for me: Does successfully imitating human responses truly equate to intelligence? Or is there something more to intelligence, perhaps linked to consciousness and subjective experience, that goes beyond mere behavioural mimicry? Turing's exploration here is not just a technological marvel but a philosophical challenge, pushing us to question the essence of intelligence and consciousness. As we advance in AI and machine learning, Turing's insights remain relevant, challenging us to confront the ethical and philosophical dilemmas posed by machines that might one day replicate or even exceed human cognitive abilities.
Turing suggested that we may examine the question "Can machines think?" by rephrasing it as "Can machines pass the pen-pal test?" By doing so, he put less emphasis on the distinction between discrete-state systems and dynamical systems (such as the brain), and more on the linguistic performance of the computer.
As Prof. Harnad emphasized in class, the goal of the Turing Test is not "simulating" but testing the capacity of computation. It aims to show that computation can be indistinguishable from human intelligence, but that doesn't necessarily mean that cognition is computation. That is why, according to Prof. Harnad, Turing was not a computationalist, and I agree with this idea.