Monday, August 28, 2023

1a. What is Computation?


What is a Turing Machine? 
Computation is Symbol Manipulation 
What is a Physical Symbol System?


Optional Reading:
Pylyshyn, Z. W. (1989). Computation in cognitive science. In M. I. Posner (Ed.), Foundations of Cognitive Science. Cambridge, MA: MIT Press.

Overview: Nobody doubts that computers have had a profound influence on the study of human cognition. The very existence of a discipline called Cognitive Science is a tribute to this influence. One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence or (b) what computing is, or at least what its essential character is, as this pertains to Cognitive Science. In this essay I will attempt to comment on both these questions.



Alternative reading for points on which you find Pylyshyn heavy going. (Remember that you do not need to master the technical details for this seminar, you just have to master the basic ideas, which are all clear and simple.)

Milkowski, M. (2013). Computational Theory of Mind. Internet Encyclopedia of Philosophy.


Pylyshyn, Z. W. (1980). Computation and cognition: Issues in the foundations of cognitive science. Behavioral and Brain Sciences, 3(1), 111-132.

Pylyshyn, Z. W. (1984). Computation and cognition. Cambridge, MA: MIT Press.


130 comments:

  1. For this week's first reading, it took time for me to understand the mechanism of the Turing machine. From my point of view, it works based on pure deductive reasoning, but induction and abduction occupy the more significant portion of human reasoning. Meanwhile, it seems that these descriptions just summarize what we have observed so far, so their reliability is still questionable.

    Also, the way the Turing machine stores information, as with a computer, is quite abstract (at least to me), since it involves a lot of mathematical symbols. Because of this, it is questionable whether we can discover how our brain works in thinking through this artificial mechanism. However, maybe our brain only has a simple mechanism, but with a rapid enough processing time.

    In short, I really appreciate the reading of section 1a, which raised lots of thoughts for me, though other confusions remain in my mind. I hope they will be resolved in further lectures and readings.

    ReplyDelete
    Replies
    1. It’s much simpler than all that. Computation is the manipulation of symbols on the basis of rules (algorithms) that operate on the (arbitrary) shapes of symbols (not their meanings, if they have any). A computer (Turing Machine) is a machine with a finite number of states. It can pass from one state to another depending on what state it is in now and what symbol it reads on its tape. The only things it can do are: (1) read symbol, (2) write symbol, (3) advance tape, (4) change state, and (5) halt. For example, “read symbol and if it’s 1, halt; if it’s not 1, go back into this same state and read the next symbol.” Look on the web for examples of less simple algorithms until you get a sense of what computation is. (You’ve been doing it in math and logic courses without noticing that that was all it was. That’s what Church, Kleene, Post, Turing and many others noticed in the 1930s when they tried to describe what mathematicians were really doing when computing.)
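
      A minimal sketch of those five operations as a toy Turing Machine in Python (a hypothetical illustration, not from the readings): the rule table maps (state, symbol read) to (symbol to write, direction to move, next state), and the machine just follows it.

      def run(tape, rules, state="scan", pos=0, max_steps=1000):
          tape = list(tape)
          for _ in range(max_steps):
              symbol = tape[pos] if 0 <= pos < len(tape) else "_"    # "_" stands for a blank square
              if (state, symbol) not in rules:
                  break                                              # no applicable rule: halt
              write, move, next_state = rules[(state, symbol)]       # (1) read symbol, look up the rule
              if 0 <= pos < len(tape):
                  tape[pos] = write                                  # (2) write symbol
              pos += {"R": 1, "L": -1, "S": 0}[move]                 # (3) advance the tape (or stay put)
              state = next_state                                     # (4) change state
              if state == "halt":
                  break                                              # (5) halt
          return "".join(tape), state

      # Rule table for the example in the reply above: scan right until a 1 is read, then halt.
      rules = {
          ("scan", "0"): ("0", "R", "scan"),    # read a 0: leave it, move right, stay in "scan"
          ("scan", "1"): ("1", "S", "halt"),    # read a 1: leave it, stay put, halt
      }
      print(run("000100", rules))               # -> ('000100', 'halt')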

      Delete
    2. NOTE TO EVERYONE: Please read the other commentaries in the thread, and especially my replies, before posting yours, so that you don't just repeat the same thing.

      Delete
    3. **BLOGGER BUG**: ONCE THE NUMBER OF COMMENTS REACHES 200 OR MORE [see the count at the beginning of the commentaries] YOU CAN STILL MAKE COMMENTS, BUT TO SEE YOUR COMMENT AFTER YOU HAVE PUBLISHED IT YOU NEED TO SCROLL DOWN TO ALMOST THE BOTTOM OF THE PAGE and click: “Load more…”
             ________________
                Load more…
             ________________
                    ——
      After 200 has been exceeded EVERYONE has to scroll down and click “Load more” each time they want to see all the posts (not just the first 200), and they also have to do that whenever they want to add another comment or reply after 200 has been exceeded.
      If you post your comment really late, I won’t see it, and you have to email me the link so I can find it. Copy/Paste it from the top of your published comment, as it appears right after your name, just as you do when you email me your full set of copy-pasted commentaries before the mid-term and before the final.
                    ——
      WEEK 5: Week 5 is an important week and topic. There is only one topic thread, but please read at least two of the readings, and do at least two skies. I hope Week 5 will be the only week in which we have the 200+ overflow problem, because there are twice the usual number of commentaries: 88 skies + 88 skies + my 176 replies = 352! In every other week it’s 2 separate topic threads, each with 88 skies plus my 88 replies (plus room for a few follow-ups when I ask questions).

      Delete
  2. To be completely honest, being a computer science major, I thought that I would understand the “What is a Turing Machine?” reading better than I actually did. Having been familiar with coding or “computation”, I thought I would somewhat understand the idea and the mechanism of the Turing machine. However, it seemed really complicated to me, and I could not visualize how it works. I then decided to find and watch a video about it to get a better understanding.

    From the “What is Computation” reading, one of the things I took away was that computation is subject to change, which then brings up the idea of algorithms and how they affect computation. If we consider computation to be a combination of small steps, then I am wondering if we can define algorithms to be the process (if that is considered a weasel word, I mean “the act” by saying process) of constructing a computation. I think what I got confused about in this reading was the definition of an output being well-defined.

    After reading the “physical symbol system”, I was wondering if we could combine two of the readings where we can associate the “knowledge level” to be the input and output of computation, and associate the “symbol level” with all of the steps to reach the output from the input.

    ReplyDelete
    Replies
    1. Hi Selin,

      If I understand correctly, you are confused about the definition of ‘outputs’ from the “What is Computation” reading. How I see it is that the actual output differs based on what is computing the function and the input. For example, as mentioned in the reading, ‘an MP3 player computes a function whose input is a compressed song and whose output is an audio waveform’. A calculator, on the other hand, takes numbers as input and outputs numbers. I hope this helps!
      Please feel free to reply to my comment if I misunderstood your comment or if you would like me to give other examples!

      P.S.
      Re: your comment about algorithms, I believe an algorithm is equivalent to a procedure which is defined as ‘a specific method for determining an output value from a set of input values’ in the reading. If that is what you meant to say when you said ‘process’, I second your thought!

      Delete
    2. Hi Aashiha,

      Thank you for explaining, and with examples! I was confused exactly about that but now I understand what it meant. And yes, that is what I meant by using 'process'!

      Delete
    3. Never mind the details. Rule-based manipulation of symbols based only on their shapes. The trick is to get the right rules (algorithms) that will give the right outputs. Computation is like a recipe for baking a (vegan) cake.
      Or the recipe for solving quadratic equations:
      aX**2 + bX + c = 0

      What is the algorithm for solving that, and what steps would a Turing Machine take to solve it? (You all know that from Algebra 1).
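
      For concreteness, here is one way that recipe could be spelled out as explicit, step-by-step symbol manipulation (a sketch in Python, using the standard quadratic formula; the function name is mine):

      import cmath    # complex square root, in case the roots are not real numbers

      def solve_quadratic(a, b, c):
          # Recipe for aX**2 + bX + c = 0, followed one step at a time:
          d = b * b - 4 * a * c                 # step 1: compute the discriminant
          r = cmath.sqrt(d)                     # step 2: take its square root
          return (-b + r) / (2 * a), (-b - r) / (2 * a)   # step 3: plug into the formula

      print(solve_quadratic(1, -3, 2))          # X**2 - 3X + 2 = 0  ->  roots 2 and 1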

      Delete
  3. “Computation is Symbol Manipulation” explains the essence of computation very well. Conery defines computation as “a sequence of simple, well-defined steps that lead to the solution of a problem” to his students who are not computer science majors, in a kid-sib way. One of the most thought-provoking points in the paragraph is the question of whether computation always involves multiple steps or if it can be a single-step process. He argues that even when there seems to be only a single step, as in functions that use memoization, a computation is still involved. Storing results in a table eliminates redundant computations and saves time for subsequent operations. What may appear as a single step actually consists of a series of smaller steps anyway.
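
    A tiny sketch of the memoization point (a toy example, not Conery's own): results that have already been computed are stored in a table, so what looks like a single step (a lookup) is really standing in for many earlier smaller steps.

    table = {}    # table of results from earlier computations

    def fib(n):
        if n in table:            # already computed: one apparent "step" (a table lookup)
            return table[n]
        result = n if n < 2 else fib(n - 1) + fib(n - 2)   # otherwise: many smaller steps
        table[n] = result         # store the result so the work is not redone
        return result

    print(fib(30))   # 832040; asking for fib(30) again is now a single lookup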

    ReplyDelete
    Replies
    1. That's it: rule-based shape-based symbol manipulation, following rules that lead step by step to the correct output.

      Delete
  4. In “What is computation”, the chapter “Computational neuroscience” is the most memorable to me. Previously, I thought that computation was “tied to numbers, acronyms, punctuation or syntax” as found in programming languages, and that they were carried out almost exclusively by computers. However, the following example on cochlear implants helped me push the boundaries of that restricted view of computation: “In some experimental treatments, such as cochlear implants, we can actually replace damaged components with computing systems that are, so much as possible, behaviorally equivalent.”

    At first, it seemed like cochlear implants were devices unrelated to computers that picked up sounds and amplified them for the patient. However, despite their simple function, they do carry out computation through representations that I wasn’t used to associating with computation. The input is coded in sound waves, while the output is coded in electric signals, which stimulate the auditory nerve directly and avoid damaged parts of the cochlea. As they carry out a similar function to the cochlea, when working correctly, we can say that cochlear implants are a computing system behaviourally equivalent to the cochlea.
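
    A very rough sketch of the kind of computation involved (a toy illustration only; real implants use far more sophisticated signal processing, and the numbers here are made up): split the incoming waveform into frequency bands and output one stimulation level per electrode.

    import numpy as np

    def toy_implant(sound_samples, n_electrodes=8):
        # Input symbols: the sampled waveform. Decompose it into frequency bands...
        spectrum = np.abs(np.fft.rfft(sound_samples))
        bands = np.array_split(spectrum, n_electrodes)
        # ...and output symbols: one stimulation level per electrode.
        return [float(band.mean()) for band in bands]

    t = np.linspace(0, 1, 16000)                    # one second, sampled at 16 kHz
    tone = np.sin(2 * np.pi * 440 * t)              # a 440 Hz tone as input
    print(toy_implant(tone))                        # the low-frequency bands respond most strongly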

    ReplyDelete
    Replies
    1. Good points. But as we’ll see more and more clearly, some things, or parts of things, can be performed or even replaced by symbol-manipulation, but others cannot. What can and what cannot, and why? This will become one of the major themes of this course.

      (You can use the cochlea, its inputs and its outputs, as an example, but think of others too.)

      Delete
    2. I believe in this case the distinction between what can and cannot be replaced by symbol-manipulation relates to our simulated ice cube example. While features of the ice cube can be symbolically encoded, this simulated ice cube will not melt, nor can I use it to cool down my drink. Similarly, although the cochlear implant may use symbol manipulation to convert sound waves into electrical signals, it also relies on its position in the inner ear and its physical contact with the auditory nerve. It would not be possible to symbolically encode this, given that it relies on something physical and tangible, just as I cannot symbolically encode the coldness of the ice cube.

      Delete
    3. This is in response to the question about what can and cannot be performed by computation alone. One thing that can use computation (input > algorithm > output) alone is the physiological withdrawal reflex response. If your pain sensory nerves detect sudden pain, your spinal cord responds by pulling the limb away from the painful stimulus. So in this case it would be input = painful stimulus, algorithm = if painful, then the spinal cord activates motor neurons to fire, output = pull limb away from painful stimulus. A robot could be designed to perform the withdrawal reflex with computation alone (input = high electrical frequency in an isolated area, algorithm = if high frequency in isolated area, then activate motion, output = pull limb away from stimulus). On the other hand, the feeling attached to the withdrawal reflex cannot be replicated computationally. When people get vaccinations, it is a sudden painful stimulus, but people do not pull their arm away from the nurse because they have added meaning to the stimulus. They can inhibit the reflex because they feel the pain is in a safe context.
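
      The reflex described above, written out as that kind of input-rule-output computation (a toy sketch; the threshold and the "safe context" flag are made up for illustration):

      PAIN_THRESHOLD = 7    # made-up value on a 0-10 intensity scale

      def withdrawal_reflex(stimulus_intensity, safe_context=False):
          # input: stimulus intensity; rule: if painful (and not inhibited), withdraw; output: action
          if stimulus_intensity > PAIN_THRESHOLD and not safe_context:
              return "pull limb away"
          return "no movement"

      print(withdrawal_reflex(9))                      # -> pull limb away
      print(withdrawal_reflex(9, safe_context=True))   # -> no movement (e.g., a vaccination)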

      Delete
  5. My initial skywriting was a question asking to have the following point cleared up: that I was confused about the fact that Conery writes that “not all computations are defined by algorithms” but then later states that what he calls “indefinite computations” - computations not defined by algorithms - are also controlled by algorithms. However, as I was writing out my thoughts, I realized that the point I was missing was quite simple - of course, being controlled by an algorithm does not entail being defined by one.
    I’m a bit embarrassed by the fact that I looked over this, but just in case there was someone tripped up by the same thing, I thought I’d post this to the blog in the hopes that this could be of some use.

    ReplyDelete
    Replies
    1. No need to worry about questions like these, about computability and decidability (i.e., what can and cannot be solved by algorithms?). All you need to understand is what computation is: What does a Turing Machine (or computer) do? and how? Answer: it manipulates symbols by following rules based on their shapes.

      Delete
  6. I found John S. Conery’s "Computation is Symbol Manipulation" the most interesting out of the three readings for 1A due to the parallels between this description of computation and our discussion of cognition in class. Conery explains that abstraction is the process of breaking down a computation into its sub-steps, but he cautions that abstraction usually stops once the step is small enough to be carried out by a machine (Conery, 2010). This point reminded me of Professor Harnad’s question of who our third-grade teacher was. When most people try to describe a thought process, they do not abstract it very far. Instead of explaining the neural processes, people can only explain what it feels like to have the thoughts. In the "Physical Symbol System" reading, the appropriate level of abstraction for the task and audience is discussed. Perhaps we are capable of abstracting deeper into our own thought process, but we have learned that that level of detail is not often what is necessary when we are asked to describe what we are thinking. I think that the similarities and differences between the computations that the brain makes and that of a computer are very interesting, and I am looking forward to learning more about this during the upcoming lectures.

    ReplyDelete
    Replies
    1. Good points. It will turn out that computation is so powerful that it can abstract the rules and use them to model just about anything in the world. But a computational model of a thing (e.g., an ice-cube) is not the same as an ice-cube. How? and why not? (This is called the Strong Church/Turing Thesis. The Weak Church/Turing Thesis is that computation -- i.e., what a Turing Machine does -- is what mathematicians do when they compute.) More about this later.

      Delete
  7. Computation, in its essence, is a process of taking in input and generating output. It is similar to the human brain except that the latter is continuous, as Conery states in his paper. If reading the articles is the input, and writing comments is the output, I would like to take a route of “higher abstraction,” contemplating the reason we are assigned these readings.

    Thus, in my understanding, the readings encourage us to examine our intelligent actions and better grasp our thinking process. The order of the required readings regarding computation ascends from lower to higher abstractions. However, when thinking in terms of cognition, simply considering the lower abstractions is often not enough to understand how thoughts appear through cognitive functions. Perhaps one day we can train our brains to store the steps in a thinking process and inscribe it into something that can be communicated among people using language tools.

    ReplyDelete
    Replies
    1. Good reflections, but it will turn out that understanding and explaining cognition will not be just a matter of moving to higher and higher levels of abstraction. But while we’re at it: what does “abstracting” mean?

      Delete
    2. The readings state abstractions are combining sub-steps in computation into one big step. I understand the three-level organization in cognitive architecture as a higher level of abstraction.

      What I am thinking is, if the human brain could keep track of every step it takes, like computers do, we could study how thoughts are generated more effectively. But then I am lost again. How do we universalize atomic operations of the brain? Does this ultimately go down to the action potential of neuron activities? Yet, none of this seems enough to even begin the study of cognition.

      Delete
    3. Forget about the brain for now and focus on what computation is, and what it can do, and how.

      Delete
  8. I found myself wondering through all three of these readings how to define the process by which one set of symbols becomes another as meaningful or not. It was my understanding from the readings that computation is this process of turning one set of symbols into another set of symbols, or, put another way, taking one set of symbols in and outputting another set of symbols which are in some way related to the first. In Computation is Symbol Manipulation it is stated that "Clearly there must be some structure to the computation, otherwise one could claim any collection of random symbols constituted a state, and any two unrelated states could form a computation", but how do we differentiate between two related and two unrelated states, or differentiate between a 'collection of random symbols' and a state? Does it have to be able to be accomplished via algorithm? In which case, does it matter if the algorithm ignores the first set of symbols and always outputs the same state no matter the input? Can we call that computation? If not, how do we restrict which algorithms perform computations and which don't? There is some reflex in me to say that to call it a computation there has to be a purpose, a defined goal, but who defines the goal? The Turing machine doesn't care what it outputs; it just follows the commands, so it is performing a computation but without intent. Is it the programmer who defines the goal? Then can we say that it is the machine that is doing the computation, or does it require both machine and intelligent agent? If so, if the agent leaves does it cease to be computation? Perhaps, but is a computer calculating pi long after humans are gone not computing? These are just some thoughts that came up when I did these readings, which I suppose boil down to 'what is the role of sentience in computation?'.

    ReplyDelete
    Replies
    1. A machine can execute the algorithms on the symbols. Computations are serial, executed one step after another. Some symbol strings are well-formed, like “2+2=4.” Some are ill-formed, like “24=+”, and meaningless. And some are well-formed, like “2+2=5”, but incorrect. People design algorithms, but we are not interested in algorithms that are meaningless, or wrong. So an algorithm that does not lead to an output that can be interpreted as meaning something, and that something is correct, may be an algorithm, but it is of no interest to us, like a game that has rules, but no interpretation (e.g., as winning or losing).

      Algorithms are just syntactic, meaning shape-based and formal. The ones we are interested in do have an interpretation (semantics, meaning), but the interpretation is in our heads, not in the computer or the computation (because computation is based on shape-manipulation, not meaning). In cognitive science, we are interested in what is going on in our heads. We are looking for a causal explanation of how our brains are able to do the kinds of things we can do. Computation is a candidate for being that explanation, but there are other candidates. (What might they be?)
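
      A tiny sketch of that point (a hypothetical example): a program can check whether a string like "2+2=4" is well-formed, and even whether it is correct, purely by manipulating the shapes of the characters; whatever meaning the string has is in our heads, not in the program.

      import re

      def well_formed(s):
          # syntax: does the string have the shape "digits + digits = digits"?
          return re.fullmatch(r"\d+\+\d+=\d+", s) is not None

      def correct(s):
          # still only shape manipulation: split on "=" and "+", compare the digit strings
          left, result = s.split("=")
          a, b = left.split("+")
          return int(a) + int(b) == int(result)

      for s in ["2+2=4", "24=+", "2+2=5"]:
          print(s, well_formed(s), well_formed(s) and correct(s))
      # 2+2=4 is well-formed and correct; 24=+ is ill-formed; 2+2=5 is well-formed but incorrect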

      Delete
  9. While reading the page under "What is a Physical Symbol System?", the modification of "physical" to the concept of a symbol system stuck out to me. That particular terminology made me wonder about the existence of a non-physical or more theoretical symbol system. Although in the chapter the author writes "Computers and human minds are examples of physical symbol systems", based on what I've learned in other psychology courses on cognition, I wonder if a supposed "theoretical" symbol system would be more accurate to how our minds function to process information, because we are capable of imagining fiction and considering the future and processing emotions, for which there may be no physical object in the real world for reference. It is also unclear in what manner we store references to the real world and therefore unclear how that symbol system would function. That being said, there are a number of programs written that mimic human behavior (most notably language models), and we *do* reference real-world things in our thoughts, so our mind certainly does function somewhat as a physical symbol system. Perhaps human minds, in addition to the physical symbol system, are also capable of processing with a "theoretical" symbol system, and this is what sets us apart from computers and artificial intelligence.

    ReplyDelete
    Replies
    1. Hi Zoe,

      I found your question on the nature of symbols, as physical or “theoretical,” very interesting. From my own understanding of the physical symbol system, I think that some of the "theoretical" symbols you've described, such as representations of fiction and the future, could be considered physical symbols. The assigned reading on physical symbol systems describes physical symbols both as "physical symbols that are part of the real world" and as symbols which "may be internal to computers and brains." Therefore, I think some of these "theoretical" symbols could be considered physical symbols, in that they are able to be considered and manipulated by a physical symbol system, such as the human brain. From my understanding, physical symbols are labeled physical insofar as they are manipulated and used by an intelligent agent to produce action. Further, the page’s discussion of the physical symbol system hypothesis notes that symbols used by intelligent agents can “refer to things in the world” as well as “useful concepts” or “internal states of the agent.” Your question has led me to consider whether the distinction between physical and “theoretical” symbols is necessary, or whether all could be considered physical symbols.

      Delete
    2. Zoe, we are talking this week about what computation is. Computation is the manipulation of symbols, based on rules, operating on the symbols’ (arbitrary) shapes. Symbols are physical objects (even when they are just scratches on paper or tape, or states in a computer). Their shapes are arbitrary, in that symbols do not resemble what they stand for (if they stand for anything), but they do have a “shape”, and the algorithm that indicates how to manipulate the symbols operates on those shapes. Both the symbol shapes and the manipulations are physical, if the computation is actually being executed.

      We’ll get to cognition later.

      Shona, in computation (which does not necessarily have anything to do with cognition), the symbols and the manipulations are physical, but the manipulations don’t have to be done by a human user; they can be done automatically, by a computer (or Turing Machine), which is also physical. It’s the meaning (if any) of the symbols and manipulations that’s in the head of a user.

      Please look at my replies about what a Turing Machine can do, and how.

      Delete
  10. I found the second article's points on the role of agency in computation to be fascinating, particularly in relation to the instances of physical and biological processes such as DNA replication. The reading states that agents can be other than humans or computers who simply follow steps in symbol manipulation. Thus, any physical system that manipulates symbols, defined in the third article as a "meaningful pattern that can be manipulated", serves as a computational agent.

    Furthermore, I found it interesting how the third article spoke of the knowledge level, which is what the computational agent "knows and believes and what its goals are", in relation to biological and physical computational entities. While the idea of biological processes being computations made intuitive sense for me, I had never before considered how it could be described as having beliefs and goals.

    I was also a bit confused about the author of the second article summarizing computation as being a "discrete process, a sequence of states that are defined by symbols", after the discussion of continuous examples of computation. I'm not sure if I'm on the right track, but I understood the discussion as meaning that while continuous physical processes such as our brains' functions and DNA replication can be considered computation, the distinction between discrete and continuous steps of computation becomes too abstract and irrelevant the deeper you go into it. Would someone be able to clear this point up for me?

    ReplyDelete
    Replies
    1. Please read my other replies about what computation is, and then tell them to me in your own words. Ignore what the authors say about cognition for now. This week is only about what computation is.

      Delete
  11. The readings from this week have me thinking more about the boundaries of what we call computation. When reading about Turing machines I thought about how, since a Turing machine is an abstract machine, if we think about the brain as a Turing machine then could we label the computations of the brain cognition? Later on, this idea that cognition is similar to brain computation was very interesting to me and got me pondering more about cognition as computation. Is all thought computation? If we think of computation as universal information processing, does that mean that anything which makes the metaphorical gears in our heads turn counts as computation? With this idea in mind, if our thoughts are simple reflections or even dreams, does this kind of mental activity count as computation, or does computation have to be a more active kind of information processing? Or, if we think more generally of computation as ‘rule-based symbol manipulation’, then would that mean that any activity, including dreaming, is just a simple derivation of meaning from the symbols in our heads? And, even though it’s a passive kind of mental activity, is the fact that the brain is working through things in the first place computation? I have many questions.

    ReplyDelete
    Replies
    1. Hi Fiona,

      From what I understand, the nature of computation and cognition is a complex topic that is still being debated by philosophers and scientists. Some people believe that all thought is computation, while others believe that there are some aspects of cognition that are not computational in nature. If we think of computation as universal information processing, then it is possible to argue that all thought is computation. However, I think some people might believe that computation requires a more active kind of information processing. For example, they might argue that dreams are not computations because they are passive and involuntary. Ultimately, the question of whether or not all thought is computation is probably a matter of definition. 

      Delete
    2. Hi Lili,
      Why can't dreams be considered active information processing? Or rather, why would unconscious information processing not be considered a form of computational analysis? Sleep research shows us that different phases of sleep cause various physiological reactions, for instance changes in brain waves that can be measured with EEG. This would suggest that some form of thought/feeling (i.e. Input) is being processed/experienced (i.e. output) whether we remember our dreams or not. Further, I would posit that if we could feed the symbols of our dreams into the Turing test, it would be able to replicate/program the dreams. This behavioural equivalence would suggest that some form of computation has occurred. Nevertheless, we don't quite understand the How (i.e. how do we process dreams? How do we interpret the features of a dream?), meaning that local computations cannot explain, in full, what cognition is.

      Delete
    3. Hi Kristi!
      If we are to tackle the difficult topic of the nature of computation and cognition (which I’m sure we will delve deep into this semester), from what I know I agree with Lili that dreams are not computation. This is because, as the readings explain, computation is symbol manipulation producing some output, using rules (algorithms) and shapes (thinking of the physical Turing machine). It requires input from the external world (which we take in with our senses) and output as behavior. In the case of dreams, neither of these is present. The EEG readings, synapses being made, thoughts/feelings of the day that might be manifested in dreams, and the experience/memory of dreaming I would think are all classified in the “processing” part of computation, lacking input and output.

      It can be perhaps argued that much of cognition's “outputs” are thoughts and feelings, as we experience a lot more than “behavioral” outputs. Or maybe output is conscious thought, or thoughts/feelings that we are actively aware of, in which dreams do not qualify (unless you’re lucid dreaming?) Could we say that behavior is only influenced by conscious thought? (but no! There are many phenomena in which subconscious thoughts influence our behavior). And now I am using weasel words and opening parentheses and this brings up too many questions.

      This is a difficult question because, unlike the cochlear implant example, which replaces a more automated/unconscious part of cognitive processing, complex thought and dreams are very complex and vague. To really find answers, it would be important to define output in cognition, and to master the processing of the brain. But even then we would be left with the hard question that would still cause confusion about the nature of computing and cognition.

      On the note of feeding the symbols of our dreams into a Turing machine, I can recall more recent studies that support this (not dreams yet, I think, but signals from the frontal cortex - which is where dreams are processed - of more concrete thoughts) and computers being able to correctly “interpret” these signals! So cool.

      Delete
    4. Before we can ponder whether the brain is doing computation, we have to be clear on what a Turing Machine or a computer is doing, because that definitely is computation! What philosophers and scientists are debating is not what computation is. Turing (and Church and Kleene and Post) explained what that is, in the 1930s. What philosophers and scientists debate about is what things, besides Turing Machines and digital computers, are also just doing computation (in whole or in part). We'll get to that.

      Delete
  12. In the “What is a Turing Machine?” reading by Copeland, it took me a while to understand what a Turing machine truly is (or more so how to visualize the way it functions). After some reflection, and visualizing with YouTube, I think I have a better grasp of how it works. The tape functions essentially like RAM, standing in for our working memory, and functions as what the machine reads as input as well as what it outputs. The Turing machine then begins to read the tape and, depending on the state and the atomic operation assigned to that state (based on the instruction table), the machine will change what is on the tape until it arrives at the answer it was programmed to find.

    What I still have a hard time grasping is the “Uncomputable numbers” section of the Copeland reading. Is the issue with uncomputable numbers just that there are some numbers whose decimal places cannot be described by a finite table of instructions, and thus the Turing machine will never be able to describe them? Is the issue ensuring that what is computable must be expressible in a finite set of instructions, even if the tape can in theory go on forever? From what I gather from the definition Conery gives in “Computation is Symbol Manipulation” this seems to be the case, but I am unsure if I fully grasp where finite ends and infinite begins.

    Another question is to bring it back to cognition. Is cognition simply the ability to follow a finite set of instructions until we arrive at the solution we hope to arrive at? How far might we want to remove ourselves from preconceived notions of how we think?

    ReplyDelete
    Replies
    1. In this course we don't have to worry about uncomputable numbers, because if computation cannot compute it, then a cognizer can't compute it either.

      Delete
  13. I found the “What is a Turing machine?” reading very interesting as well as informative (at least for me since I didn’t have much knowledge about a Turing machine and how it works) because it breaks down the components of a Turing machine in a very understandable manner, presenting the physical structure of the machine, as well as how it works to perform the six types of fundamental operations during the course of a computation. I also liked that this reading can be related to the “Computation is Symbol Manipulation” reading because it supports the definition provided by the author of that text which is that a computation is “any process that maps an input state to an output state, where each state is a collection of symbols”. Indeed, in the “what is a Turing Machine” reading the author presents a computation as a sequence of transitions because when the machine performs an operation, it starts in one state and then it reads a symbol, writes a new one and then moves on to the next state, going from an input state to an output state.

    ReplyDelete
    Replies
    1. Yes, as in a Turing Machine, a computation is a finite sequence of finite states. Think of factoring a quadratic equation (whether or not you remember what factoring and a quadratic equation are): aX**2 + bX + c = 0. The algorithm (formula) is like a recipe you follow, manipulating the ingredients one after the other to bake a (vegan) cake. [If you've forgotten the recipe, look it up with Google and then apply it, step by step, to 2X**2 + 3X + 7 = 0 (but I can't promise X will be a rational number, because I just made the example up! I can promise, though, that that will be the only computation you'll do in this course!)] In that computation YOU were being the Turing Machine.
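
      (Working the made-up example through the recipe, just as an arithmetic check: for 2X**2 + 3X + 7 = 0 the discriminant is b**2 - 4ac = 9 - 56 = -47, which is negative, so X is indeed not a rational number -- the two roots are the complex numbers (-3 ± i√47) / 4.)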

      Delete
  14. These readings helped me understand computation ultimately as the manipulation of symbols, which rely on algorithms to dictate how symbols will be manipulated. The idea of agency discussed in the second reading (Computation as Symbol Manipulation) really interested me, especially in the context of the brain. As I understand it, an agent possesses the algorithm to carry out a given computation, and connects the input symbols to whatever the output is. I began thinking about the brain as a computer (if it is one at all) and considered what the agent is in various ‘cognitive’ processes (I recognize that cognition was identified as a weasel word in class but I’m not sure what else to call them here). For example, when we make a decision, it would seem that we as humans are the agent in carrying out the various operations to eventually arrive at an output–the decision. However, biological and electrochemical processes are also at play, controlling the way in which neural circuits relay and consolidate signals to eventually arrive at the decision. In the reading, Conery identifies biochemical processes as agents themselves. Therefore, my curiosity is concerning who we consider the agent here? The physical, electrochemical, and biochemical mechanisms which all partially dictate how information from external and internal sources will be processed, or the human itself, which possesses the brain in which this is all happening?

    ReplyDelete
    Replies
    1. Well, you did understand what computation is; the confusion just set in when you started contemplating whether cognition -- the non-weasel word for it is "thinking" -- is just computation. We'll get to that; till then, hold on to what definitely is just computation.

      Delete
  15. I found the second reading cleared up a lot that I found confusing from the Turing Machine reading. Computing is the process of turning one set of symbols into another, such as the Sieve of Eratosthenes example which turns a set of integers into a list of prime numbers through a series of steps. If I understood correctly, this "process" is the set of rules defined in the algorithm. A point this paper brings up is that the size of a “step” can be ambiguous, as many steps contain smaller substeps that are also smaller computation steps. This makes sense to me as an example of how the mind works, such as last class when you asked a student to remember the name of a former teacher -- “remembering” requires an immeasurable number of smaller mental computations that feel like one step when thinking.
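
    A short sketch of that Sieve, with each pass written as an explicit step (a standard textbook version, not Conery's exact formulation):

    def sieve(n):
        # start from the symbols 2..n and repeatedly cross out the multiples of each survivor
        candidates = list(range(2, n + 1))
        primes = []
        while candidates:
            p = candidates[0]                                    # the smallest survivor is prime
            primes.append(p)
            candidates = [c for c in candidates if c % p != 0]   # one "step": cross out its multiples
            # and each such step is itself made of smaller sub-steps (divisions, comparisons)
        return primes

    print(sieve(30))   # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]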

    ReplyDelete
    Replies
    1. The steps are not ambiguous in computation. They are ambiguous when it comes to thinking, because thinking may not be just computation.

      Delete
  16. As someone with no computer science background, I am currently finding it challenging to think in terms of the symbols, states and steps that describe the computations done by computers. The readings this week filled me with more questions than answers, though I hope to gain a better understanding of computation as I move through this course. A passage that I felt was easier to understand (kid-sibly), and which interested me, was a portion of “What is Computation” which discussed mental arithmetic and behavioural equivalence. For one, it helped me to picture the steps of a computation by imagining they were written on paper, for example, to solve an arithmetic problem (carry the one, add/subtract, etc.). It also helped me understand how storing steps from previously solved equations in memory would accelerate the problem-solving process. While it feels entirely different to complete a complex mathematical problem in my head, compared to when I am able to use physical tools and symbols, I can see that whatever order I do the steps in and whatever resources I use, it is considered to be behaviourally equivalent (does the same computation) as long as it consistently produces the right answer. That is, if a system is able to take an input and arrive at the same correct output as another system.

    It is somewhat manageable for me to get my head around a human brain and a computer similarly computing a math problem. However, to think of all my thoughts in terms of computations (even for purely metaphorical purposes) feels more difficult. There is always input (visual, auditory, olfactory and tactile perception), and there is certainly memory, but breaking a complex inner world into steps, into algorithms, feels difficult.

    ReplyDelete
    Replies
    1. Hi Josie! As a psychology and biology student, I really understand your struggles. It's really hard for me to think about computations with algorithms (maybe because I'm really bad at maths?), but I can understand the logic from the human brain's point of view.
      What is even harder to imagine is that computers could be able to understand human feelings. Computation is a 0-and-1 process, whereas human thinking is way more complex. How can we recreate that? How can T3 understand feelings and errors? How do we create a machine that understands mistakes, even makes them, with this simple system as its basis?
      I hope this class will enlighten me on these questions!

      Delete
    2. Josie, we’ll talk about “Weak Equivalence” and “Strong Equivalence” between two computational systems during my lecture. One is same input/output and the other is same input/output and same algorithm. One question is: If computationalism is right, and cognition really is just computation, would we want/need Weak Equivalence or Strong Equivalence? But don’t wrack your brains too much about that, because “Stevan Says” computationalism is wrong. The right conclusion, however, is not that cognition is not computation at all (as Searle will argue), but that cognition is not all computation.

      And, yes, some dynamical systems are more easily explained as continuous analog processes rather than discrete digital computations.

      Lena, you’re right to puzzle about that. It could turn out that feelings are analog: biochemical rather than computational. (But even then we’d have the “Hard Problem” as well as the “Other-Minds Problem. So it would still be hard to imagine or understand.)
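
      A toy sketch of the Weak/Strong distinction (a hypothetical example): the two functions below compute the same input/output mapping (the sum 1 + 2 + ... + n), so they are weakly equivalent, but they do it by different algorithms, so they are not strongly equivalent.

      def sum_loop(n):
          # algorithm 1: add the numbers one at a time
          total = 0
          for i in range(1, n + 1):
              total += i
          return total

      def sum_formula(n):
          # algorithm 2: Gauss's closed-form formula, in one step
          return n * (n + 1) // 2

      print(sum_loop(100), sum_formula(100))   # same outputs (5050 and 5050): weakly equivalent;
                                               # different internal steps: not strongly equivalent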

      Delete
  17. As someone with very little knowledge of computer science, I found the readings very helpful in providing some of the basics behind how the very computer I am using right now functions. In the “What is Computation?” reading, computations are defined as “a sequence of state transitions, where a state is defined by a set of symbols”. I especially appreciated the description of the role of the agent in the relationship between algorithms and computations. Interactive computations, having humans as agents whose actions inform the computation and act upon the algorithms, made me think of playing video games. I find that interactive computations are well portrayed in how players (agents) and their choices/actions affect their virtual environment. In the “What is a Turing Machine?” reading, I found the non-terminating Turing machine description particularly fascinating. Simply put, a terminating Turing machine program was described as a program that terminates on any input, whereas a non-terminating Turing machine may run forever. The examples of air traffic control systems, automated teller machine networks and nuclear reactor control systems were provided as real-world applications of non-terminating Turing machines. Based on the latter examples, I was wondering whether, in real-world applications, non-terminating Turing machines always have the possibility of being terminated, say to be rebooted or shut down altogether, or whether there are any special cases in which that is not possible.
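
    A minimal sketch of a non-terminating, interactive computation of the kind described (a toy control loop; the sensor reading and the threshold are placeholders): it has no final output and is designed to run forever, though it can of course still be shut down from outside.

    import random, time

    def read_temperature():            # placeholder "sensor"
        return random.uniform(20, 120)

    def control_loop():
        # a non-terminating computation: an endless read-decide-act cycle, with no final output
        while True:
            if read_temperature() > 100:
                print("too hot: open the cooling valve")
            time.sleep(1)              # ...until someone shuts it down from outside (e.g., Ctrl-C)

    # control_loop()   # left commented out: running it really would never halt on its own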

    ReplyDelete
    Replies
    1. Fortunately, although it is interesting detail about computation, neither uncomputable problems nor non-halting problems are relevant for cognitive science as far as I know.

      Delete
    2. I also found this component of the Turing machine interesting: the question of what happens if it is a non-halting problem and just keeps repeating a behavior infinitely, with no end and no progress. Maybe the reason a never-ending repetitive behavior seems irrelevant to cognitive science is that at some point humans are aware of a behavior or thought that is using up and wasting energy without movement, change or progress, and they will thus stop that behavior. That may in fact be one of the aspects of humans that differentiates us from any machine; most of our functions were created for survival, including the way our mind processes and uses information, unlike a machine, which has a program written for an explicit purpose. Looking at human evolution, we were built for survival and not intelligence or creativity; perhaps this gives insight into some of the struggles we encounter with machine programs not knowing how to self-correct when they are not functional.

      Delete
    3. If computationalism is true, then the brain would stop trying an unhalting algorithm because it's not working.

      Delete
  18. From this week’s readings, I found “What is Computation” by Ian Horswill to be the most enjoyable. The first thing that caught my attention was Horswill’s pushback against the use of a functional model to explain computations. Having taken cognitive science and affective neuroscience classes, I was surprised to see this stance. In those classes, using a functional model to explain phenomena such as the visual system and emotion-like behaviour proved quite useful for attempting to understand the emergence of these systems. I found it interesting that for the case of computation, functional models are insufficient since they fail to capture instances of computations that do not produce a sole desired output, and the example of the cell phone and how its software never stops running really worked to ground this point and convince me that the functional model is indeed limited.

    Also, from “What is a Physical Symbol System”, I thought the concept of an appropriate level of abstraction maps quite well onto the relationship some of us have with the physical sciences. In the article, a delivery robot is said to be able to map the environment at high levels of abstraction, but while these levels are easier to understand, discussion at lower levels might be more useful for solving the problem. At a low level of abstraction in the physical sciences such as Physics, one might come to grasp thermodynamics and the interactions between subatomic particles, but at this level of abstraction, it may be hard to reason out how a chemical reaction, for example, occurs. Abstracting further, understanding certain chemical processes might not be the most useful in trying to explain macro-scale biological and ecological processes. These macro-scale processes, possibly by virtue of their high level of abstraction, may be easier to understand and more grounded for many people. But similar to the case of the delivery robot, while it may be easier to understand a disease at the biological/population level (high level of abstraction), the disease may be more easily combated through an understanding at the biochemical level (lower level of abstraction).

    ReplyDelete
    Replies
    1. I'm not sure levels of description in the sciences are related to higher and lower level programming languages. But are you clear on what computation itself is, as Turing formalized in the Turing Machine?

      Delete
    2. Stevan: According to Alan Turing, what is a Turing Machine, and what are the things it can do?

      ChatGPT: Alan Turing introduced the concept of a "Turing machine" in his 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem." The Turing machine is an idealized mathematical model that captures the essence of algorithmic processes. Here's a brief summary:

      A TM is composed of an infinite tape divided into cells, a tape head that moves left or right across the tape, and a finite set of states in which the machine can be at any given moment.

      The tape contains symbols from a finite alphabet. One of the symbols represents a blank, which indicates no information.

      The machine operates by reading a symbol under the tape head and transitioning to a new state based on its current state and the symbol it reads. Depending on its state and the read symbol, the machine can change the symbol on the tape, move the tape head left or right, or halt.

      Computation: TMs can perform any calculation that can be algorithmically described. Anything computable is computable by a TM.

      Stevan: According to Turing, what are mathematicians doing when they say they are computing?

      ChatGPT: In Alan Turing's seminal 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem," he addressed the notion of what it means to compute. Turing described a "computing machine" (what would later be called a Turing machine) as an abstract representation of a human "computer" — a person who carries out calculations, particularly in the mathematical sense.

      According to Turing: When mathematicians say they are "computing" a number, they are typically engaged in a definite method or set of rules that, when followed, would result in the determination of the number in question.
      This process of computation is mechanical and deterministic. It does not rely on intuition or ingenuity but follows a step-by-step procedure.

      Turing then used this intuitive understanding of computation to formalize it with his abstract computing machines. The purpose of this abstraction was to capture the essence of this mechanical, rule-governed process that mathematicians engage in when they say they are computing.


      Stevan: Is it fair to say that by “computation,” Turing meant manipulating symbols according to an algorithm (recipe) for manipulating the symbols on the basis of their shape -- which is arbitrary, not their meanings -- as a Turing machine does?

      ChatGPT: Yes, that's a precise characterization. Turing's conception of computation revolved around the idea of symbol manipulation based on formal rules. The Turing machine operates purely on the basis of the shapes of the symbols (their physical or syntactic form) without any regard for any potential meaning (semantics) the symbols might have. The machine processes symbols according to a predefined set of rules or an algorithm, regardless of any meaning those symbols might carry outside of the machine's operations.

      This idea underscores the core of the symbol grounding problem: the challenge of how symbols get their meaning, especially in the context of computational systems. In the Turing machine model, symbols are devoid of intrinsic meaning and are manipulated purely based on their shape, as dictated by the machine's transition rules.


      Stevan: In Turing computation, can any object, of any shape, be used as a symbol for manipulation according to the algorithm?

      Delete
    3. ChatGPT: Yes, in the theoretical framework of Turing computation, the specific nature or appearance of the symbols is arbitrary. What's crucial is that the symbols are distinguishable from one another and can be consistently identified and manipulated according to the machine's algorithm. In essence, any object that can be consistently differentiated from other objects can serve as a symbol in a Turing machine. The symbols on a Turing tape could be represented by various shapes, colors, or any other distinguishable characteristics, as long as the machine's transition rules can operate on them unambiguously.

      Delete
  19. Before reading 'What is a Turing Machine,' I had a vague understanding of the Turing machine. I thought the Turing machine was a physical device, perhaps the simplest version or model of a computer, not realizing that the Turing machine is actually an idealized device. However, my perspective changed as I delved into the material and discovered the significance of the Universal Turing Machine concept in the field of computer science. This concept demonstrates that a machine can handle various types of computing tasks, depending on the flexibility of its programming.

    "What is a Physical Symbol System" helped me address a question that had puzzled me during my study of philosophy: why is philosophy often challenging to comprehend due to its complexity and strict writing style? Now, I understand that this difficulty arises from the high level of abstraction in philosophical texts. Philosophers prioritize precision in detail to enhance predictability, resulting in intricacies that can make their writings less accessible.

    I'm curious about the rationale behind not replicating every facet of human capability when creating an AI agent. Instead, the focus seems to be on emulating higher-level cognitive functions and harnessing modern computer technology for their development. Could you provide more clarity on which specific higher-level functions are being emphasized in AI development?

    ReplyDelete
  20. The readings for this section answered the questions I had surrounding what computation is, and how to go about thinking of it for the purpose of this course. I found that the paper by Conery (2010) clearly explained computation as “a sequence of simple, well-defined steps that lead to the solution of a problem” (2). In this paper Conery explains that a problem, as well as the solution, are encoded as symbols (a symbol being defined in the reading on physical symbol systems as “a meaningful pattern that can be manipulated”). Thus, a ‘step’ to solve a problem consists of the manipulation of these symbols. I found Conery’s explanation of computation particularly helpful in that it described computation, a process which can seem highly technical and abstract, in plain and easily understandable terms.

    I also found Conery’s discussion of physical systems and continuous operations especially interesting. Conery claims that “every computation carried out by a physical system,” such as a human brain, is “the result of continuous operations” (4). Conery defines continuous operations as operations where “it becomes too difficult, or even impossible, to describe the operations in terms of symbol manipulations” (3). This claim led me to question the implications of the argument that cognition is computation. From this perspective how can we understand what cognition is, if it is too difficult to describe the manipulations and steps made throughout this process?

    ReplyDelete
  21. After reading this week's articles, most of my questions on the first reading were answered in previous comments.
    However, I still have a few unclear points on the reading "What is a Physical Symbol System?".

    I understand that it is difficult to choose the correct level of abstraction, depending on the situation. "The lower the level, the more difficult it is to reason with", but lower levels are also more predictable and accurate. How is that so, if they are more difficult to reason with?
    Lower levels are neutral levels, but once understood they seem less difficult to reason with.
    It says the reason is that it involves more steps - does that mean that the goal is to improve the level of abstraction? More steps in the processing to have more specificity and a better understanding?
    In the case of not knowing the information at a low level of understanding, how can the computer compensate for this gap? By mixing levels of abstraction?

    Finally, these questions made me think about a diagram I studied in Psychology of Pain, from Noam Chomsky, named "Stages of objectivity and subjectivity". Objective, verifiable sciences like Chemistry restrict themselves to simple questions. If the system they study becomes too complex for their speciality, they hand it to biologists, who can hand it to psychologists, and so on. The more complex and broad the system is, the more subjective it becomes (with historians and novelists at the more subjective end of the axis).
    I found that interesting, and similar to the levels of abstraction for computers!

    ReplyDelete
  22. After reading “What is a Turing Machine?”, I was somewhat confused, so I decided to watch a video explaining the machine more simply. From my understanding, the Turing machine breaks down complex processes into much smaller, simpler steps. For instance, if I prompt it to carry out a complex mathematical sequence, it will ‘read’ each individual number I feed it and then consult an algorithm which will tell it what to output in response. In this way, a process which overall would seem to require some form of intelligence is broken down into smaller ‘copy and paste’ steps. If this machine is used as a sort of representation of human cognition, we can think of it as our minds taking in individual pieces of information in rapid succession and then carrying out computations which lead to some sort of output - thoughts, motion, words, maybe all of the above. My question is, if that is true, wouldn’t that remove any sort of autonomy from cognition? If my mind receives an input, and a series of involuntary processes happen beyond my control based on a predefined ‘algorithm’, then what makes me different from a computer? More complex processes? More going on? And if that is so, how do feelings and cognitive states factor into the equation? Are emotional states simply outputs to certain stimuli? Furthermore, are different people’s ‘algorithms’ based on different personality and genetic factors which shape their reactions and thoughts? Lots of questions.

    ReplyDelete
  23. The readings were quite interesting, and in unison seemed to point to one conclusion.
    First I am presented with the theoretical machine that could accomplish any computation, then I am presented with the basics of computation, wherein it is explained as input-output symbol manipulation, and finally I am presented with physical symbol models that model a world/subjectivity. Taking these in reverse would seem to imply that the Turing machine, capable of any computation, could accomplish the computations (or some equivalent) of physical symbol models, such as a mind.
    What still remains difficult for me is the idea that a Turing machine could abstract onto the semantic level. I am reminded of a passage of Plato's, where Socrates expounds on the pastry maker*. The pastry maker knows the steps to create adequate sustenance, the exact series of transformations to generate that which is needed for humoral balance, but while executing their algorithm perfectly from A to E they have no knowledge of what purpose or meaning the steps have. I feel like the Turing machine running the series of steps would be not unlike the pastry maker; after all, ChatGPT has no idea what it's saying. But I'm sure further readings and lectures will help; perhaps I've misunderstood abstraction.

    *(rough translation, no modern equivalent exists for the profession described)

    ReplyDelete
  24. Throughout the readings of 1a I found myself trying to compare the Turing machine to the brain, and computation to abstract thoughts or cognition.
    From Jack Copeland’s and John S. Conery’s readings, if cognition operates like the Turing machine, then it should be representable as a finite number of symbols with a programmable head that controls atomic operations (reading, writing, halting).
    For thought, what is a symbol? Is it a neuron? A neurotransmitter? An idea? A letter of a thought (if it is in words)? Perhaps the binary of an action potential firing vs not firing? The action potential seems easiest to work with. The atomic operations would be the firing (or not firing) across various synapses, and eventually halting. The programmable head would likely be our own in this case. The agents from Conery's article would be neurotransmitters, as they cause the state transitions of neurons from non-firing to firing (two possible states). As an alternative to thoughts, visual processing seems more computational than abstract thought, as the discrete states would be each level of processing, where one aspect of vision is added to our understanding at a time (such as color, depth and motion). Copeland's reading also provoked the questions: if thought is computation, what determines the input and what determines our rules of operation? Is it nature/nurture? The laws of the universe? Perhaps "thought" is too vague. For cognition related to our senses, the input is the environment.
    Another application of the article to cognition is that while the machine has high speed and high working memory, human cognition is less so because it is not abstract; it is restricted by chemical reactions and natural processes.
    The DNA transcription example demonstrates computation very well. This may be because it is a very specific process, with clear symbols, a ribbon (like the Turing machine) and obvious steps. Perhaps cognition is more clearly computation when broken down into more fundamental processes, and when represented in specific ways.
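
    A rough sketch of why the translation example reads so naturally as computation: each step is just a shape-based lookup on a ribbon of symbols. The tiny codon table below covers only a few codons and is meant purely as an illustration of the idea, not as Conery's own formulation:

      # Illustrative fragment of the genetic code: codon (three-symbol chunk) -> amino acid.
      CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

      def translate(mrna):
          protein = []
          for i in range(0, len(mrna), 3):        # advance along the "ribbon" three symbols at a time
              amino = CODON_TABLE[mrna[i:i + 3]]  # rule applied to the shape of the symbols only
              if amino == "STOP":                 # halting condition
                  break
              protein.append(amino)
          return protein

      print(translate("AUGUUUGGCUAA"))   # -> ['Met', 'Phe', 'Gly']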

    It does seem that cognition and thought are computation, they just have to be represented in the correct way.

    As for levels of abstraction, as discussed in the third reading, this made me think of the phrase "understanding the big picture". We see the big picture by taking a step back, which illuminates our understanding of what is going on and what we should do. High-level abstraction is making something more abstract in order to make it easier for humans to specify, understand, and act on. The two ideas are very similar. Therefore, perhaps seeing the big picture requires a higher level of abstraction of the concepts being understood. It is possible I am stating the obvious, but this made me think.

    ReplyDelete
    Replies
    1. Thank you Nicole for this fascinating insight! It seems logical to me, and I'd be curious to see your theory debunked. If I may add to your last point, I also think of abstraction as having a negative relationship with detail: if you are too focused on minute details you won't be able to see the big picture. I wonder if there is some kind of special biological mechanism that humans have that interprets the sum of all individual action potentials (signals, as you suggested) as seeing the big picture and being self-aware, as opposed to a Turing machine which mindlessly (I hope) processes a sea of binary codes.

      Delete
  25. Reading 1a: What is computation
    Upon completing the readings for this section, a concept that really stuck with me was that Turing machines are capable of computing anything that modern computers can compute. This confused me a little at first (mostly because I feel Turing machines are more complicated than modern computers), but it was then explained by applying the idea of behavioural equivalence between computers and Turing machines to something simple like addition. This provided a missing link in my understanding of the switch from Turing machines to computers (the information revolution). The switch was not about correctness but rather optimization, a switch that is now repeating itself with humans and AI.
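
    One way to see behavioural equivalence in miniature (a toy illustration of my own, not one from the reading): two procedures that add by entirely different internal steps, yet are indistinguishable from the outside because they produce the same output for every input.

      def add_builtin(a, b):
          return a + b                 # one step, using the machine's adder

      def add_by_counting(a, b):
          for _ in range(b):           # the long way: increment a, one unit at a time, b times
              a += 1
          return a

      # Behaviourally equivalent: same input-output mapping, different internal procedure.
      assert all(add_builtin(a, b) == add_by_counting(a, b)
                 for a in range(20) for b in range(20))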

    ReplyDelete
    Replies
    1. You've made an insightful connection when you mentioned that the shift from Turing machines to modern computers was about optimization rather than correctness. Modern computers are more efficient and user-friendly, making them practical tools for a wide range of applications. And you're right, this evolution from theoretical foundations to practical optimization is repeating itself with the development of AI.

      Delete
  26. Hi Kaitlin! You asked some really interesting questions in your post, and I had similar ones especially about the Turing machine. In theorizing about the differences between Turing tests 1-4, where is the line differentiating the more advanced Turing machines from a human mind? Your idea that the collective human sentience could form a universal Turing machine poses an interesting answer to my question, that maybe the goal is for there to not be an obvious line. If that ends up the case, while there are terrifying implications to machines being indistinguishable from humans, there are also positive ones like you brought up with reviving brain dead patients (although if they ended up behaving as an AI programmed to act like a human, that could be equally terrifying).

    ReplyDelete
  27. As a computer science student, the last thing I anticipated from the initial readings of a psychology elective, recommended by a friend, was a complete shift and expansion of my perspective on computation. Previously, having taken COMP 330 Theory of Computation, I had primarily associated computation with the execution of actions by computers and electronic devices in general. My recollection mainly revolved around states and transitions that characterized a machine; however, this was merely another exercise in computer science, destined for grading. I didn't pause to consider if there was more depth to it.

    Yet, the readings, particularly "What is Computation?" and "What is a Physical Symbol System?", advocated the notion that any intelligent agent constitutes a physical symbol system, encompassing both computers and human minds. It emphasized that the brain is also a computational system, extending beyond just computer processing. This led me to revisit the class question: "Can you recall your third-grade teacher's name?" I did remember, but was too shy to answer. Subsequently, I tried to reverse-engineer my memory retrieval process. What stood out the most? Without a doubt, the fact that I was able to play video games if I finished my work early. But how did this relate to recalling a name? Why could I recall that teacher's name, yet struggle with those from my fourth and fifth grade? They were excellent teachers, and I enjoyed my time in their classes. Upon further thinking, it clicked. I used to boast to students in other classes about being able to play games during lessons, leading them to ask, "Really? Who's your teacher?" There's likely more depth to this, but this example vividly illustrated how our brains are capable of executing intricate computations. Indeed, the other reading by Conery underscored that a computation could be as straightforward as a mental calculation like 1 + 1, or the effort to recall a name from the past and discern how you retained that information.

    ReplyDelete
  28. From what I've understood, the Turing machine is basically an idealized machine that can compute. It is insane to me how much thought and imagination must have gone into this machine. The fact that a Turing machine can compute anything a laptop manufactured today can do, in my opinion, goes to show the attention to detail in this idealized machine. Something that is boggling my mind a bit is the Universal Turing machine. I want to confirm if what I've understood is correct: It's the same machine but the machine table is translated into a universally known language, such as binary code. If that is the case, what is the use of having multiple different Universal Turing machines? Isn't one set of universal symbols enough?
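
    My (possibly imperfect) understanding, put into a toy sketch: what makes a machine "universal" is not a special symbol set, but the fact that any other machine's table can itself be written out as symbols and handed to one fixed simulator as input. In that sense a single universal machine is enough. The code below is only an illustration, not anything from the reading:

      def universal(table, tape, state, blank=" "):
          """One fixed program that simulates any Turing machine whose table it is given."""
          tape, head = list(tape) + [blank], 0
          while state != "halt":
              write, move, state = table[(state, tape[head])]
              tape[head] = write
              head += move
              if head == len(tape):
                  tape.append(blank)        # extend the tape on demand
          return "".join(tape).rstrip()

      # The "program" of some other machine, passed in as ordinary data (symbols):
      inverter = {("s", "0"): ("1", 1, "s"),
                  ("s", "1"): ("0", 1, "s"),
                  ("s", " "): (" ", 0, "halt")}

      print(universal(inverter, "1100", "s"))   # -> "0011"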

    ReplyDelete
  29. The readings on "What is computation" this week have given me a more profound insight into the parallels between biological systems and computational entities, particularly the abstraction levels of knowledge and symbols. Even though I've taken computer science courses, this was the first time I was introduced to computers and human minds as physical symbol systems. I found the phrase "if you can't tell apart a robot from a person, then the mechanism inside the robot is cognition" particularly interesting. This made me reflect on which levels of abstraction emotions would fall into, since they are one of the more important aspects that influence cognitive processes in humans, judgment, and decision-making. Emotions span multiple levels of abstraction and involve intricate neural pathways and neurotransmitters, unlike computational entities, which follow well-defined algorithms, a precisely defined problem, and a sequence of steps (What is a Turing Machine article).

    I wonder how far the boundaries of algorithmic computation can be pushed and whether it can ever replicate the complexity and depth of the human mind and cognitive processes.

    ReplyDelete
    Replies
    1. To build on Rosalie's comment, which brought up some of the questions I was asking myself - what about emotions with regard to classical CTM? After having taken an Affective Neuroscience course last semester, I am aware of the intricacies of emotions and emotion processing. As the reading already explains, the analogy of software/hardware for mind/brain has been quite criticized, so I would not even try to find an analogy for emotions. They, indeed, span multiple levels, different areas of the brain, and different cells, and can even change within one. This presence spread all over the brain, as well as its impact on several different executive functions, does not seem to 'fit' well into the computational picture.

      Delete
  30. In the reading "What is a Physical Symbol System?", two levels of abstraction are outlined: the knowledge level and the symbol level. These are said to be common in both living things (biological entities) and computers (computational entities). However, I don't quite agree with the depiction of the knowledge level. While both entities can be described at this level, the way they 'know' strikes me as very different.

    When we think of computation as a rule-based manipulation of symbols, purely based on their shapes, it makes sense that computers just run the right algorithms to get the right outputs. But this raises the question: what do they really know? If you ask them about knowledge or goals, they will apply the right computational steps to get the right answer (in some cases, that would mean retrieving data from their memory). I believe human cognition, even if we can reduce it to computation - to a certain extent - goes beyond this. Humans experience a "feeling" of knowing. Following Searle’s Chinese room argument: even if he followed the right program for manipulating Chinese symbols, just as a computer does, he could answer appropriately in strings of Chinese characters, without attaching any meaning to those symbols.

    But then, how do we define 'knowledge'? What are the criteria to know something? Is it just about retrieving data and being able to output it? What does 'understanding' really mean? While I sense a difference between knowing and not knowing, it could just be an illusion of comprehension. A human's knowledge level could be of the same abstraction level as a machine’s knowledge level, but we've cultivated a deep-seated belief that we perceive meaning transcending mere symbols.

    ReplyDelete
  31. Hello! In "What is computation? Computation is symbol manipulation" one paragraph particularly stood out to me. Conery explains that although all algorithms define computations, some computations do not need algorithms.
    He then uses the example of DNA translation. I guess I may not have a good understanding of what an algorithm is, but I don't see how "the instructions" for mapping gene sequences to protein sequences are non-algorithmic... Is it a question of complexity - as in, this computation is too simple to require an algorithm? Or is it a question of agent - as in, this computation is non-algorithmic because no individual chose the instructions?

    I guess my question is: how is that not an algorithm, and what is an algorithm?

    ReplyDelete
    Replies
    1. I was wondering the same thing during the reading. To my understanding, Conery points out that computation is related to the specific transitions between steps and not to the agents completing them. This led me to believe that algorithms specifically have to do with the agent involved in the process. As the reading states, DNA translation is the result of a "biochemical process" that acts as the agent. Again, this seems to indicate that because of the specific agent involved, the computations are not a result of following an algorithm's specific steps. While all of the specific requirements of an algorithm are still somewhat unclear to me, I believe that the factor determining that DNA translation is non-algorithmic is the agent of the computation rather than the steps it completes. As the process still transitions between these steps using symbols, it can be seen as a computation while not following a specific algorithm.

      Delete
    2. I have the same question again! I think that, as Jenny says, the distinction between algorithmic and non-algorithmic computations is in the kind of agent that is performing the computation. For a computation to be algorithmic, it has to be performed by an agent capable of following the algorithm (so either a human being who can understand the instructions, or something like a Universal Turing Machine which is set up so that it can be given the algorithm as an input and execute it). The enzymes involved in DNA transcription can't be programmed to follow instructions, so the state changes they compute are not algorithmic.

      However, this definition of an algorithmic computation doesn't mesh with Conery's later example of interactive computations. He says that things like a human keyboard hit being encoded symbolically in a computer do not follow an algorithm, even though the agents involved (humans and computers) should both be capable of following algorithms.

      Maybe this example is a non-algorithmic computation because, in practice, the human hitting the keyboard doesn't think of themself as following a set of instructions in order to perform that action? This seems to imply that human beings have some extra property that computers don't, like free will or an element of randomness, which allows them to perform actions that aren't algorithmic even though they are perfectly capable of following algorithms.

      In terms of the computer in this example, although I've already claimed that computers are agents capable of executing algorithms, I think that this computation specifically is not considered algorithmic because it involves the transduction of a keystroke from a physical signal to an electronic one, which happens by mechanical means and not because the computer is following some code. This makes it more comparable to DNA transcription than the same computer executing a program. But then there's a further problem here: Conery himself argues that all computations become continuous at a low enough level of abstraction. The distinction between the computer as an agent and the mechanical process of keystroke encoding only exists when we look at higher levels of abstraction. At the most basic level though both are just the motion of atoms, and atoms aren't programmable, so the distinction between algorithmic and non-algorithmic computations breaks down.

      It's a thorny idea for sure, and I'm still not sure if I've grasped it correctly.

      Delete
    3. I was (and probably still am) similarly confused on this point. In the paragraph before, Conery does note that a computation is defined by a sequence of symbolic states, and "not the nature of the agent that carries out the computation". He goes on to explain that if we extend the computation metaphor to natural processes like DNA translation, the definition of "agents" that follow the steps of an algorithm expands to include biochemical processes. So if a biochemical process is an "agent" acting on the mRNA sequences, my best assumption is that the biochemical process cannot also be the algorithm, meaning it's just an agent acting out a single computation based on a set of rules, and will not always follow the same algorithm to the same "proper final solution".

      Delete
  32. The readings brought to my attention how difficult it is to truly define computation, as it is not something I had put too much thought into prior. I previously thought of it as simple programming, but now I am aware of the kinds of algorithms and almost ‘question and answer’ systems that computation really is. Additionally, reading about how computation’s definition is fixed, agreed upon and will most likely not change is interesting since I normally think of science and psychology as ever changing as new research emerges. Despite this, I was equally surprised to find that the concept of whether cognition is computation is still in a flux and is not universally agreed upon. The specific idea of the “imitation game” in the “What is Computation?” reading was particularly striking to me as it implied a lot about his stance on learning and behavior. I never previously thought of computation as an imitation game, but after reading this section it became very clear how it was, and helped solidify the new definition of computation I know due to these readings. The “imitation game” also reminded me of how many children learn language and how to do simple tasks by imitating other people’s actions.

    This comparison brought a question to my attention on whether the writer of this piece would argue that the computer can learn and adapt similarly to humans, and what he would consider that, if not a form of intelligence. Additionally, due to his opinion on the definition of intelligence, it brings to mind what the author would consider as emotional intelligence. Could a computer have emotional intelligence because it is programmed to know emotions even though it does not feel itself?

    ReplyDelete
    Replies
    1. Hi Delaney, I completely agree with you on how interesting it is that our understanding of computation has expanded far beyond the aspects of simple programming. It is really cool how computation as a whole encompasses such a wide range of systems and algorithms that resemble so many different types of processes, most of which nowadays are exceedingly intricate. The idea that the definition of computation as a whole is relatively fixed is in itself intriguing, to say the least. The imitation game concept that you brought up struck a chord with me as well. It definitely aligns with how children learn by mimicking actions, which, like you said, begs the question of whether computational programs can in fact possess, or at least gain, emotional intelligence simply from imitating it.

      When looking at human behavior, most of our societal norms and how we act on a day-to-day basis are inherently learned. Whether from parents or friends, how we act is often a product of our environment. How would you calculate the emotional intelligence of a baby? If a child were to grow up without any external environment, surely they would not have even close to the same emotional capabilities as the average human. Could this be seen as a similar process to computers and programs? I thought your point was absolutely fascinating when you stated, "Could a computer have emotional intelligence because it is programmed to know emotions even though it does not feel itself?". But I do want to bring up: how do we know that our friends and family even feel emotions? I can say that I feel emotion, as I'm sure you can say the same. But how can we exactly quantify it? How are we to say what can and cannot feel an emotion? Where do we start to draw the line?

      Delete

  33. 1.a What is computation?

    One thing that struck me while I read about mental imagery theory is the opposition between introspection and the way we remember information. To me, it felt like that was one of the clear distinctions between a human being trying to remember something and a computer delivering an output. The computer follows procedures to compute operations. It follows a path that could lead it to halt (or not) but it keeps going anyways whereas humans won’t necessarily have a strict line to follow to achieve their goals, and might have to review their way of thinking in order to be able to produce the desired output.

    ReplyDelete
  34. What I found the most interesting is how the computational theory of mind is sprinkled in throughout all three readings, in which the physical implementation is disregarded. When discussing physical symbol systems, it was proposed that not every level of a human needs to be emulated but rather just the higher levels. This rules out the possible involvement of “lower” levels in the manipulation of physical symbols as general intelligent action. As for models, there have also been discussions of whether representations of worlds are necessary if agents could directly manipulate their environments. In this case, abstractions would not be necessary and would be entirely accurate. However, there are valid concerns about whether these models would be useful, despite being correct. Memoisation may also be difficult in these cases as state transitions would overwrite previous ones, prohibiting them from being saved for future use.

    In "Computation is Symbol Manipulation", Conery defines computation as a discrete process regardless of how it is executed (algorithm execution vs. human interaction). Since the Turing Machine is an idealised device, it could also technically be implemented in a variety of ways without impacting the operation itself (i.e., the manipulation of physical symbols).

    ReplyDelete
  35. Conery offers quite an intuitive definition of computation: a succession of clear-cut steps, or symbol manipulations, which produces as output the solution to a well-defined problem.
    Turing's ground-breaking concept of a 'Turing machine,' comprising a head and a paper tape, concretized the idea of computation. The head scans and subsequently writes on the tape. This programmable head can compute a problem in real time, processing each step and recording the output on the tape. By means of this conceptual machine, Turing offered the foundations for the development of modern computers.
    But it was pointed out that not all numbers are 'computable'; that is, not all numbers can be processed by a Turing machine, as is the case for some real numbers.
    This raises the question: what would be the conceptual impact of a Turing machine capable of processing all real numbers? It is intuitive to anticipate that this would lead to more intricate computations, possibly yielding greater precision, and potentially allowing the computation of more nuanced concepts, ones that might coincide more closely with the intricacies of the real world and abstract mathematical concepts. This loosely reminds me of the discovery of quantum physics, a new way of seeing foundational particles that allows for a more accurate explanation of real-world events.

    ReplyDelete
  36. I found the idea of the physical symbol system hypothesis clearly presented and very attractive: “a physical symbol system has the necessary and sufficient means for general intelligent action”. The parallel between biological systems and computers seems interesting at first. But this hypothesis also implies that human intelligence must be a physical symbol system (necessary condition) and that machines could achieve general intelligence (sufficient condition).
    Does that also imply that human intelligence is merely a symbol manipulator? If this is the case, I find it challenging to endorse, especially in light of more recent findings in cognitive science. In a book I read this summer, The Strange Order of Things, Antonio Damasio emphasizes the role of feelings in human decision-making and advocates the idea of embodied cognition (cognition is shaped by the organism's entire body). Thus, replicating human intelligence in a machine might not be feasible without simulating the intricate interaction between the entire body and the brain.
    It appears to me that the influence of feelings, unconscious instincts, and intuitions on thought and decision-making processes cannot be aligned with the physical symbol system because they do not involve symbols or symbol manipulations. If I'm right, I find it difficult to support the physical symbol system hypothesis these days. Is there a way to reconcile these perspectives, or is the theory considered outdated?

    ReplyDelete
  37. I appreciated this week’s reading on The Turing Machine and I think it is a great starting point for this course. I first came across the Turing Machine in a Philosophy of Neuroscience course and wished that the topic had been presented earlier in the course, as I think it nicely frames the concept of computation, which is a tenet of cognitive science.

    I also found Conery's reflection on abstraction in "Computation is Symbol Manipulation" interesting. At what point are we satisfied with a "step" and don't need to break it down into a series of sub-steps to understand it? Conery points out that at a certain level of reduction, it becomes virtually impossible to describe the symbol manipulations to change states. He states that we only reduce a step to a level where people can be convinced "that a step can be mechanized and performed by a machine". This prompted me to think about how we attempt to explain any concept/process (i.e., how mitosis works, how a car burns gas, etc.) at certain levels of abstraction. When someone doesn't understand a concept, you can either increase or decrease the level of abstraction, but if you keep reducing it to lower levels of detail, there is eventually a point where it is too difficult or even impossible to explain. This is also referenced in "What is Computation" by Ian Horswill, where he states, "you can never describe a procedure "completely"; you can only describe it in terms of other procedures the reader already understands". It is unsettling to me that we can't boil down our explanations or computations to a definitive step.

    ReplyDelete
  38. From what I understand from the latter part of "What is a Turing Machine", uncomputable numbers are numbers for which it is not possible to construct a Turing machine that can compute them, due to the absence of the discernible patterns required for an algorithm to accurately generate all their digits. However, I am curious whether it's feasible to calculate an uncomputable number to a certain degree of accuracy, or if we're entirely incapable of expressing any of its individual digits.

    Upon reading "Computation is Symbol Manipulation", I found it fascinating how, despite the brain's computation being vastly more complex than traditional digital computers, it is still fundamentally a computational process. This made me wonder whether it would be possible to create a computer that can perfectly replicate the human brain, but the example given by the DNA translation made me reflect on the gap between the biological principles of the brain and the algorithmic and logical aspect of computers.

    ReplyDelete
  39. Just like many others, at first I found ‘What is a Turing Machine’ a bit hard to understand, even though it is supposed to be rather straightforward as it is based on a set of symbols and rules. It reminds me of when I’m doing my computer science assignments, where my thought process is always more complicated than it needs to be. The solutions always turn out to be more simple than I thought, but I always overthink it.

    The readings made me reflect on how many implicit thoughts or processes happen in our mind that we are unable to describe or explain. It's difficult to break down our thought process into smaller steps, especially for the random thoughts that just 'pop into our mind' or our 'intuition'. I think that while computation does play a huge role in cognition, cognition involves so much more, and I'm eager to learn more about it.

    ReplyDelete
  40. Of particular interest to me were the criticisms made by the embodied cognition "camp" against computationalism in Horst's (2009) Encyclopedia entry. Perhaps this is a failure on my end to fully grasp the potential of computation, but I am curious to what extent embodied information can be computed as yet another host of symbols, particularly in a way that is dynamic rather than static. Although proponents of computationalism argue that this poses no problem, I wonder to what extent are the complex dynamics between agent and environment sufficiently accounted for under this model.

    ReplyDelete
  41. There were a lot of posts. Although I wasn't able to read everybody's comments, I will share the two cents that I have gleaned.

    Computation, from Conery's article, is a discrete process that manipulates symbols by following rules and algorithms to solve a problem. The Turing machine, an early idealized binary computer model, encapsulates this concept. Still, although our minds do not follow exact rules like the computers we have created, the question of cognition nowadays brings new insight into what computation can do. Advancements show that computation can, in a way, go beyond numbers and syntax, encompassing abstract processes mimicking cognition. This evolving comprehension finds applications in neuroscience and complex problem-solving. As our understanding grows, the idea of computation will evolve too.

    ReplyDelete
  42. As a cognitive science student specializing in neuroscience, I haven’t been exposed to a lot of computer science related topics yet— which is why I found this week’s readings to be a challenging yet rewarding opportunity to familiarize myself with the computational side of the field.

    In the past, I had only briefly encountered the concept of Turing machines. My previous understanding of Turing Machines was superficial and limited solely to its hardware and basic mechanics; I knew that some sort of tape was fed through it, but the deeper meaning and purpose underlying it was still unknown to me. Thus, it was Copeland’s “What is a Turing Machine?” article that really caught my interest—it helped to fill these gaps I had in my knowledge regarding the Turing Machine’s purpose and contributions in relation to specifically the field of computer algorithms. For example, as Copeland explains, actions the machine carries out are based on its specific instruction tables (Copeland, 2000). As well, the Turing Machine being specifically a theoretical model grabbed my attention— if I understood correctly, it is a unique device that focuses on problem-solving, such as determining the outcome and limits when it comes to certain algorithm-based questions (Copeland, 2000). This idea of a model reminds me of the philosophical realism debate, where it has been argued that despite not being 100% applicable in all real-life situations or conditions, models still hold importance in better understanding issues in our world. The Turing Machine, although abstract, was still able to help solve problems that are relevant to modern computers— such as the idea of Universal Turing Machines that Copeland as well as Horswill highlighted in “What is Computation?” I’m curious as to whether there are other similar computational models that also had such an impact on the field? Based on “What is a Physical Symbol System?” by Poole and Mackworth, physical symbol systems seem to be similar to Turing Machines in that they both utilize symbols to develop more complicated patterns— are there other examples of similarities between these models? How do these compare in function, what are their disadvantages and advantages? To add, what are other unique ways in which Turing Machines have contributed to current knowledge?

    While "What is a Turing Machine?" by Copeland provides an insightful example of a groundbreaking invention, Horswill's "What is Computation?" was also an interesting read in that it provided more historical context and background information on the field itself. For example, thinking of computation as belonging to an "information revolution," as Horswill defines it, really helped put these concepts that were new to me into perspective. The idea that computation makes thinking easier really puts into words how important the field has been in shaping society - I'm curious as to what and how other factors also contributed to this "information revolution"?

    ReplyDelete
  43. This first series of readings has resonated a lot with me because it has reminded me of discussions on the movie "Ex Machina" that I had in a previous class on consciousness.
    This movie presents Caleb, an incredible coder, who is chosen for an experiment, which happens to be administering the Turing Test to a new robot. The movie touches on the incapacity to distinguish what really differs between humans and robots, since the robot ends up acting, sensing, and understanding as humans do; the only thing differentiating it from us is the electrical fibers constructing its entire person. It was also stated in the "Artificial Intelligence" text that biological and computational entities share common levels of functioning: knowledge (what the agent knows) and symbols (how the agent reasons). Therefore, if they share similar levels of functioning and if they act so humanly, what could be the marker of a distinction between humans and machines? I would say that it is the feeling of being. I don't think, as far as I know, that AIs are conscious that they are experiencing, feeling and being.

    ReplyDelete
  44. The readings for this week were very insightful for understanding the basics of what computation is. Conery's "Computation is Symbol Manipulation" helped me grasp the general idea, namely that a computation is the transformation of an input into an output based on sets of discrete rules (i.e., algorithms), all of which are physical in nature. It is generating a solution by manipulating symbols, which are physical shapes. Conery mentions that these symbol manipulations are state transitions and that the size and number of steps do not matter. The term "memoization" was described in the reading as a way of saving an output so it can be reused at a later stage of the computation (collapsing it into a single step). This was interesting to read as it immediately made me think about working memory, but I am unsure if it relates to it at all.
    Another aspect of Conery's reading that grabbed my attention (and confused me) was the relationship between computation and algorithms: it seems like algorithms serve as rules for a computation to generate the output/manipulate symbols. This made me imagine that algorithms precede computations, in the sense that an algorithm has the potential to become a computation once it is carried out. I'm not sure if this is the right way to view the relationship between them, so I was just wondering if that can be said.
    The last question I had is about these rule-based symbol manipulations that are always physical. Through my limited knowledge of linguistics, I suppose the basis of computation is similar to syntax in language, which is form-based and exists based on rules. Thus, computation is syntactic and doesn't do anything with meaning. I was wondering whether it is possible to encode semantics into computation, or is this incompatible?
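
    Since memoization came up above as the "saving option" that lets a result be reused later, here is a minimal sketch of the idea, using Fibonacci numbers as a stand-in problem (the example is mine, not Conery's):

      cache = {}                      # previously computed results, reusable later in one step

      def fib(n):
          if n in cache:              # memoized: the whole sub-computation collapses to a lookup
              return cache[n]
          result = n if n < 2 else fib(n - 1) + fib(n - 2)
          cache[n] = result           # save the output for later stages of the computation
          return result

      print(fib(30))   # -> 832040; without the cache this would take over a million recursive calls

    Whether this is a good analogy for working memory is, as noted above, an open question; the code only shows the computational trick itself.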

    ReplyDelete
  45. Based on these readings, one question keeps recurring in my mind: What distinguishes a concept or knowledge from being considered a weasel word? It appears that abstract knowledge and concepts can only be comprehended in an abstract manner, without the possibility of using specific examples to define them, as illustrated in the reading about the Turing machine. On the other hand, some concepts can be deduced to achieve a higher level of understanding, but this may also result in the loss of important details. How can we establish a clear boundary between the two? Perhaps individuals with exceptionally sharp intellects can comprehend it effortlessly, while the rest of us might be Lilliputians, not quite smart enough to truly grasp what computation entails.

    ReplyDelete
  46. I had to read the article "What is computation?" a couple of times to understand it. As I understand it, Turing machines help us understand the limits and capacities of computers. It certainly shows the complexity behind a "simple machine" and some similarities it has with human cognition. What I found the most interesting about the article is the memory. The memory of a Turing machine is represented as an indicator that "can be set in any number of positions" with encoded instructions that are either symbolic or numerical. However, it does make me wonder how "flexible" these machines are becoming with all the computational advances, and whether they can reach human flexibility when giving an output instead of following predetermined instructions.

    ReplyDelete
  47. The reading titled "What is Computation" provided a fascinating and insightful overview of the concept of computation. However, I still find myself confused about a particular aspect: the distinction between a function and a procedure. It appears to me that the most notable difference between these two lies in the level of abstraction they operate at. In my understanding, a procedure is the process that allows you to go from input to output, while a function is a procedure that characterizes the relationship between a specific set of inputs and a specific set of outputs. In that sense, a function is more precise, as it states which inputs are linked to which outputs. Is this the only difference? Or did I misunderstand these concepts?

    ReplyDelete
    Replies
    1. I was a little confused about the difference between a function and procedure as well so upon trying to differentiate these two things, this is what I came up with. You are correct in that a procedure refers to a sequence of steps that dictate how to go from a given input to a desired output. They are described as algorithms, a set of rules that aid in the manipulation of symbols (computation–this is also mentioned in previous posts). They do not specify the exact input-output mappings like functions do as you said.

      In addition, I believe that functions are deterministic: for a given input, they produce a unique and specific output, while procedures can be either deterministic or non-deterministic, meaning they may have multiple possible outcomes for a given input. A function also assigns only one output to each input, ensuring that there is no ambiguity attached to the result. Procedures, on the other hand, may not have this one-to-one mapping (especially in non-deterministic systems). For example, the absolute value function f(x)=|x| guarantees a unique, non-negative result for every input x (for the input x=-5, it unequivocally gives us f(x)=5). Its procedure would be: 1. Take an input number "x". 2. If x is negative, negate it. 3. The result is the absolute value of x. The procedure outlines the steps to compute the absolute value but does not specify the exact input-output mapping explicitly.
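
      Put in code (my own toy rendering of the distinction, not something from the reading): the function is the input-output mapping itself, while a procedure is one particular recipe for computing it, and the same function can be realized by different procedures.

        # Two different procedures...
        def abs_by_negation(x):
            if x < 0:                  # step 2 of the recipe above: if x is negative, negate it
                x = -x
            return x

        def abs_by_square_root(x):
            return (x * x) ** 0.5      # a different recipe entirely

        # ...that compute the same function |x| (the same input-output mapping):
        assert all(abs_by_negation(x) == abs_by_square_root(x) for x in range(-10, 11))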

      Delete
  48. I found this week's readings very interesting and in particular appreciated how they each helped me understand the others better. I read them in the order they are presented above so I found the article on physical symbol systems particularly helpful in understanding Turing Machines and why they are pertinent to this class. They are simply machines that make use of arbitrary symbols (the 1s and 0s that populate the squares of the potentially infinite tape) and a set of rules by which the machine manipulates those symbols. Computation is symbol manipulation (as discussed in the second reading of the three). Take some meaning carrying symbols and, given an internal state, apply a set of rules to them in order to transform them into other meaning carrying symbols. It seems so simple when written out in language but this practice generates amazingly complex behaviour in both computers and cognizers (I don't know whether I'm right to separate those two groups). This emergence of complexity from simplicity fascinates and confuses me and is a big part of my interest in cognitive science.

    ReplyDelete
  49. Reading Conery's "Computation is Symbol Manipulation" instantly brought back memories of my first programming class back when I was 14. To my good fortune I had a delightful teacher who took a ground-up approach, starting us with a basic introduction to binary, then logic gates, then Python, where he introduced us to algorithms (we even made a Sieve of Eratosthenes, which is likely what ignited this memory in the first place). I found the definition provided to be charming and eloquent in a way I hadn't encountered before, particularly the usage of "symbol," which broadened the scope of the idea for me.

    Now, being a Psychology and Philosophy student, the article, particularly the metaphor of DNA translation, left me curious about neural processes and computation. Some questions it left me with: what did the evolution of computation within the brain look like (natural selection shaping brain computation)? What would mutations be considered - errors in computation? Lastly, I'm curious about networks of people acting together to perform computation. I'm left with the sense that we as humans are built by many, many small computations that have aggregated over time and evolved to the point where we are now. This thought poses even more questions for me that I look forward to satisfying in class.

    ReplyDelete
  50. One thing that stuck with me in "What is Computation" was how Ian Horswill mentions Alan Turing's argument that if a computer is able to fool humans into thinking it was a human, then it would have to be considered intelligent. Now, with the development of AI, we are reaching the stage where it could fool humans into thinking it was human (e.g., a student passing off ChatGPT answers as their own). This leads us to question what rights and responsibilities we should confer upon AI. With AI now being perceived as intelligent, questions arise regarding its entitlement to intellectual property rights. Furthermore, if AI advances to the stage of independently generating ideas and creative content, what ramifications will that bear for issues like ownership, copyright, and ethical considerations?

    ReplyDelete
  51. From my understanding of the readings and the skywritings, the questions that I found relevant to explore were the following: What is a Turing machine (and a UTM)? Why can we think of it as an idealized system? How does it work? What are some of its limits? What is(n’t) computation? Can the way that we compute, as humans, be replaced?

    1) A Turing machine is an idealized computing device. It consists of an infinite tape (serving both as a vehicle for input and output and as a working memory for storing results of steps of a computation) divided into cells, a programmable read/write head that can move left or right along the tape, a finite set of symbols (0s and 1s, for example) that can be written on the tape (because they are finite, the machine can only perform a limited number of tasks), a finite set of states (that depend on the current state and the symbol it reads on the tape), and a set of rules that determine how the machine operates (algorithms). It is "idealized" in the sense that it is a theoretical construct designed to simplify the study of computation - conceptualizing the capabilities and limitations of computers. We also know that this machine is capable of realizing various types of atomic tasks. They include reading symbols, writing symbols, advancing the tape, changing states, and halting. The reading also accounts for the fact that some functions are uncomputable for the machine - precisely those that imply operations with uncomputable numbers (real numbers whose decimal representations are irregular and lack any discernible pattern). Turing finally presents us with the concept of Universal Turing Machines to explain that anything that can be computed algorithmically can be computed by a Turing machine.

    2) 3) Computation can be defined as rule-based, shape-based symbol manipulation, following rules that lead step by step to the correct output. An interesting idea that emerged from the reading was that of behavioral equivalence as a way to define computation (if a person or system reliably produces the right answer, they can be considered to have solved the problem regardless of what procedure or representation(s) they used). This raises the question of the replacement of our brains (computational systems) by Turing machines (other computational systems). The analogy with the cochlear implant system in the section on computational neuroscience provides a strong instance of how this can be done for our senses.
    Another idea that intrigued me was that of the physical symbol system hypothesis, which assumes that a physical symbol system has the necessary and sufficient means for general intelligent action. Even though the description of the levels of abstraction identified for both biological and computational entities made it logical to believe that both are similar, I believe (for now) that intelligence is a complex phenomenon that involves far more than symbolic manipulation (computation). Intelligent action might also involve self-consciousness (qualitative states) and the ability to unify internal registrations as our own states of awareness.

    ReplyDelete
  52. When we read or listen to a sentence, our brain engages in symbol manipulation, transforming words and meanings to allow for a coherent understanding. This involves multiple steps, from recognizing individual words to dissecting sentence structures and comprehending the overall message. Each of these steps is a discrete computation, as per Conery's definition, where linguistic symbols represent words and concepts. This perspective urges us to examine step sizes, potentially revealing nuances within seemingly simple processes. Considering this, how might recognizing these symbol manipulations enhance our approach to understanding intricate language tasks? Furthermore, how does this view of computation influence our perception of problem-solving in various fields?
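
    To make the "discrete steps" reading of sentence comprehension concrete, here is a deliberately crude sketch of my own (not a model from the readings): even a toy pipeline splits into recognizable symbol-manipulation stages.

      # A crude three-stage pipeline, each stage a discrete symbol manipulation.
      LEXICON = {"the": "DET", "cat": "NOUN", "sleeps": "VERB"}

      def tokenize(sentence):
          return sentence.lower().split()                        # step 1: carve the input into word symbols

      def tag(tokens):
          return [(w, LEXICON.get(w, "UNK")) for w in tokens]    # step 2: map each symbol to a category

      def is_simple_clause(tagged):
          return [c for _, c in tagged] == ["DET", "NOUN", "VERB"]   # step 3: check the overall structure

      print(is_simple_clause(tag(tokenize("The cat sleeps"))))   # -> True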

    ReplyDelete
  53. The chapter on Turing machines in computation highlights the fundamental idea that interpretation plays a critical role in information processing, which I see as a great starting point for a higher-level study of cognition. Just as Turing machines demonstrate that the same computational task can yield different outcomes when executed with different instructions, human cognition underscores the variability in how individuals perceive and interpret the same data. This subjectivity versus objectivity duality in cognitive processing taps into the complexity of human thought. It suggests that even when presented with identical facts, the rich tapestry of individual experiences, beliefs, and biases results in unique and often divergent interpretations. Does the use of symbols create a simpler form of communication? Or a greater possibility of interpretations? This aspect of cognition continues to be a compelling field of psychological research, and continues to draw me into exploring the nature of knowledge and understanding.

    ReplyDelete
  54. I accidentally put my skywritings for reading 1b in 1a comments, and 1a in 1b comments. Sorry about this.

    Here is my skywriting for reading 1a:

    “It could turn out that consciousness is not something that can be given a computational account”(p.36)

    I totally agree with this. Digital demonstration and computation are good for processing what the brain is doing and for some technical uses. But what about abstract elements such as emotions, feelings, and moods, which are already hard for humans to express and explain? How would the computer process and generate the related data? This just made me start to think, and maybe overthink: would the computational process cross an ethical border line? Everyone's feelings/emotions/moods seem pretty private, and getting exposure to what the brain is actually doing might be controversial.

    ReplyDelete
  55. What does it mean that computation is symbol manipulation? Symbols, representing an aspect of the intended modeled environment, are processed through an algorithm, or, to put it more simply, a set of rules. An algorithm defines the conditions of the input and the transitions necessary to then reach a respective output. These symbols, at the hands of the algorithm, are subject to change. The operation performed can vary in complexity, from a single step, such as with memoization, to incredibly complex sequences with millions of operations performed on a symbol.
    The classic view of computation maintains that it is discrete and well-defined. However, interactive computations push the boundary of this notion, relying on user input which can cause the state of the computation to change. The user's choice is symbolized and then manipulated by the algorithm to produce an output, so it maintains the classic model of computation in that sense. Within these definitions of computation, an astonishing amount can be accomplished. Nonetheless, there are limitations on what is computable. As a view of how the brain works, I also do not feel that it covers all of our cognitive capacity, but it is a jumping-off point for understanding the system.
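
    As a toy illustration of the "interactive" case described above (my own sketch, not Conery's example): each user choice is encoded as a symbol, and the algorithm still just maps (state, symbol) pairs to new states, so the classic picture survives even though the input arrives one choice at a time.

      # A vending-machine-style controller: state transitions driven by user-input symbols.
      TRANSITIONS = {
          ("idle", "coin"): "paid",
          ("paid", "select"): "dispensing",
          ("paid", "refund"): "idle",
          ("dispensing", "take"): "idle",
      }

      def step(state, symbol):
          # The user's choice, once symbolized, is manipulated like any other input.
          return TRANSITIONS.get((state, symbol), state)   # unknown symbols leave the state unchanged

      state = "idle"
      for symbol in ["coin", "select", "take"]:            # could just as well come from input()
          state = step(state, symbol)
          print(symbol, "->", state)
      # coin -> paid, select -> dispensing, take -> idle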

    ReplyDelete
  56. For me, the most striking component of the readings in this section was Conery's example of DNA translation as an illustration of computation; at first glance, I found this connection (a real, biological process defined in terms of computation) to be some sort of eureka moment. The author is providing an example of a biological computation defined by a biochemical process (as opposed to an algorithm), with varying levels of abstraction. By this logic, human cognition and the processes in the brain that give rise to the mind could also be explained in terms of computation. However, upon further thought, I realized that this analogy was still lacking, as explanation alone does not amount to understanding. Computation, whether defined by an algorithm, a biochemical process, or any other logical form, at most gives us a detailed set of instructions for how certain abstract concepts connect. It does not give us a fundamental understanding of these abstract "nodes" or "bits"; hence, to me, it lacks true meaning. This is a critique of the computational theory of mind as a whole; however, I found it to be particularly relevant to this example.

    ReplyDelete


  57. From my understanding, a Turing machine is similar to a computer. Hypothetical in nature, it is able to compute anything. The Turing machine can handle the most complicated mathematical structures using the simplest model of computation, but it is limited to the finite.
    This brings three concepts to mind: infinity, the Big Bang, and the limits of human cognitive abilities. No Turing machine, no computer, no human can understand concepts that are continuous, that are infinite. This brings me to the question: as the development of AI continues to evolve, could the ability to comprehend the infinite be the defining factor in creating machines that surpass human cognitive abilities?

    ReplyDelete
  58. I found the notion behind Simon and Newell's Physical Symbol System Hypothesis to be the most interesting aspect of these three readings, particularly in how it states that being a physical symbol system is necessary and sufficient for intelligence. In their paper "Necessary and Sufficient", Yoshihara and Yoshihara believe that the terms necessary and sufficient are misapplied in modern-day science and lead to flawed conclusions. They claim that the conditions that determine whether a given property X is necessary and sufficient for a given phenomenon Y deal with different issues and have no logical rationale for why they are associated. In their view, misapplied logic leads to misinterpretations and erroneous conclusions drawn from data. One pitfall associated with misapplied necessary and sufficient conditions is that failing the necessary condition does not immediately mean that the relevant property is unimportant; it may be more of a test of redundancy. Another pitfall is that it simplifies the truth, specifically in that the sufficient condition may inaccurately suggest that no other factors are needed for a specific phenomenon. In reading the Physical Symbol System Hypothesis as claiming that a physical symbol system is the necessary and sufficient means for general intelligence, I wonder whether Newell and Simon fell into some of the pitfalls associated with misapplying those terms, as outlined by Yoshihara and Yoshihara. I'm inclined to believe that this is the case, specifically that their claim inaccurately suggests that a physical symbol system is the only thing required for intelligence, as much of cognition is not computational or mere symbol manipulation (such as understanding, according to Roger Penrose).

    ReplyDelete
  59. I found the Ubiquity Symposium article "What is Computation? Computation is Symbol Manipulation" to be the most thought-provoking of this week's readings. Specifically, its focus on 'agents' and how, in the case of DNA, the computation of DNA translation is not completed by an algorithm; rather, "a biochemical process is the agent responsible for the state transitions." I find it fascinating how the definition of computation, being simplified to "state transitions", allows us to look at the natural world and find processes which fit this definition. I wonder what other examples, like the DNA one, can be found in the natural world to further serve as metaphors for computation?

    ReplyDelete
  60. As a student whose research interest is in artificial intelligence intersection with the brain, I am more than happy to share some of our thoughts.

    Currently, there’s a huge gap between computational neuroscience and AI. Very few researchers try to apply neuroscience techniques to boost current language models like GPT-based models. This week’s reading and video are helpful.

    First of all, the trending artificial intelligence approaches in natural language processing (transformers), computer vision (CNNs, transformers, and diffusion), and reinforcement learning (reward-based training) did not, per se, benefit from these developments. Even though some scientists like Geoffrey Hinton attempted to align neural network structure with the brain, the majority of the neuroscience community has paid more attention to proving "biological plausibility". Referring to the reading "What is a Physical Symbol System", "The models are judged not by whether they are correct, but by whether they are useful" (a sentence also cited by one of our classmates), experts in AI tend to make current AI more computationally practical, so that it works on chips (most likely GPUs), through mathematical and physical approaches. Even if a structure does not make any biological sense, these structures are useful in many fields, like painting (diffusion, which is inspired by physics) and answering questions (ChatGPT), and they still display strong ability and infinite potential. Some architectures may be inspired by the brain from a broad perspective, and I may discuss that in the corresponding skywriting.

    ReplyDelete
    Replies
    1. Secondly, in the “What is Computation” video, an opinion that drew my attention is that, in terms of the computation between inputs and outputs, the human brain and silicon can both be considered symbol manipulation systems. The human brain remains unsolved: over the past decades researchers have come to better understand some mechanisms, such as the auditory, visual, and emotional systems, but a more detailed and unified theory has not been developed. The same is true in the artificial intelligence field: there are billions of parameters, and even within each layer, how the neurons in a neural network behave during backpropagation (the most common training approach) is still a mystery. Thus, the alignment problem between two black boxes makes it hard to simply use one system to stand in for or represent the other. Therefore, there is a long way to go to bridge the gap between current statistically driven AI systems and computational neuroscience. The future direction is to crack one of the two black boxes and use that knowledge to interpret the unsolved one. Finding connections between machine computation and brain computation might crack the mystery for both systems.

      Delete
  61. I do see how crucial it is for me to understand the readings in the context of modern-day society. With the integration of artificial intelligence into our daily lives more than ever, we need to understand the limits of a computational mechanism, and it leads me to wonder whether cognition works in a similar way. We have a lack of understanding of the systems that govern cognition and of what their limits are. If we apply the Turing machine's capabilities in a psychological manner, we can use them to test the extent to which a machine can behave, act and think like a human. If a computer is capable of engaging in dialogue with a real person without being recognised as a machine, we can assume that it exhibits human-like cognitive ability. It's important to recognize through this reading that a Turing machine operates by reading input symbols and computing them to produce an output. So, do our brains work the same way, through symbol manipulation and multi-layered complex processes that are almost limitless? Where do we draw the line between human intelligence and computer intelligence?

    ReplyDelete
  62. Like Salma said above, I was most intrigued by the principle of behavioral equivalence in the “What is Computation” reading. I wanted to address Salma’s question of whether something should even be considered a computation if the produced output is undesired or wrong. From what I understand from the readings, the principle of behavioral equivalence is a key component of how we interpret the modern idea of computation. To us, it is almost unfathomable that two computers (say, a Mac and a Windows Surface) would give two different answers for the same computation. Even though they may use different procedures and representations (which makes sense given their difference in OS), they will still provide us with the desired answer. However, as the “What is Computation” reading notes in its “The Functional Model” section, computation at its very basis is merely the process of deriving an output from a given input (or inputs). Therefore, even if an undesired output were produced, it can still most definitely be considered a computation.
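    As a small illustration of this (my own toy example, not something from the reading), here are two procedures with different internal organization that compute the same input-output function and so, under the functional model, count as behaviourally equivalent:

```python
# A minimal sketch (my own toy example, not from the reading) of behavioural
# equivalence under the functional model: two procedures with different
# internal organization that compute the same input-output mapping.

def factorial_iterative(n):
    """Compute n! with a loop, the way one machine might."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n):
    """Compute n! by recursion, the way a differently built machine might."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# Behaviourally equivalent: identical outputs for every input tested,
# even though the procedures (and the machines running them) differ.
assert all(factorial_iterative(n) == factorial_recursive(n) for n in range(10))
```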

    ReplyDelete
  63. In this reading, a question was raised related to the psychological and social aspects of computation: the accessibility of computational systems. This is addressed in user interface design, and various design guidelines exist, such as the color contrast checker. The problem is that accessibility more often than not seems to be implemented only after new computational systems are built, so there is always a gap in which user groups are served.

    ReplyDelete
  64. In this set of readings, I found the concept of behavioural equivalence highly fascinating for a variety of reasons.

    In the “What is Computation” reading, I found Turing’s definition of intelligence as “computational equivalence with the human brain” particularly interesting. Turing's definition actually surprised me, since I’d always thought of intelligence as the feeling that occurs when one effectively manipulates information. For me, intelligence is closer to the feeling of “eureka” when a particularly complex symbol manipulation is carried out, as opposed to the actual behaviour of complex symbol manipulation. I suppose I feel as though intelligence is the qualitative experience associated with computation, while Turing restricts intelligence to a strictly behavioural phenomenon. Perhaps Turing set his definition of intelligence this way so he could avoid having to deal with the problem of other minds as applied to a machine: by restricting his definition of intelligence to what is behaviourally visible, Turing avoids having to consider the existence of qualitative experiences in “intelligent” computers.

    Furthermore, from the same reading, the question of whether a PC can become behaviourally equivalent to a human brain is briefly raised. My gut reaction is that it can, for the following reason: in the “What is Computation” reading, emulators are shown to allow one computer to run another even if the computer being emulated runs on a fundamentally different structure of functions from the computer doing the emulating. This is possible because both computers have a shared, unified metarepresentation sitting at their base: binary code. It is conceivable that the human brain will never be representable in a binary system; however, I think it is possible that human brains and PCs still share some form of unified metarepresentation, since they are both extended in time and space. Assuming the laws of physics are unified, and that they apply equally to both the brain and the PC, there must exist a unified meta-representational system that underlies both. Therefore, since PCs and brains both exist in space and time, they share a metarepresentational system that should allow the brain to be emulated by a PC.

    ReplyDelete
  65. As a student in Math & CS, I was familiar with a lot of what was discussed in the writings. I will comment, however, on “Computation is Symbol Manipulation”, though part of the skywriting will still depend on the first reading, “What is a Turing Machine?”.

    It is interesting to me that we can represent a computation with a short, and certainly finite, algorithm, even when it manipulates analog values. For a Turing machine to compute a + b with infinite precision, for example, would take infinite time, but the description of the algorithm that accomplishes this task is finite, and therefore the task counts, if my understanding is correct, as computable. As for calculating pi on a Turing Machine: the algorithm is finite, since pi is computable (via infinite series and such), but the full solution would only be reached after an infinite number of steps.
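    As a concrete illustration of a finite description of an infinite process (my own sketch, using the well-known Leibniz series rather than anything from the readings): the program below is only a few lines long, yet producing all of pi's digits would require letting it run forever.

```python
# A finite description of an infinite process: the Leibniz series
#   pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# The program text is a handful of lines, but producing every digit of pi
# would require running the loop without end.

def leibniz_pi(n_terms):
    """Approximate pi using the first n_terms of the Leibniz series."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

print(leibniz_pi(1_000_000))   # close to 3.14159..., and closer with more terms
```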

    It had previously seemed to me that, if the laws of physics cannot be discretized, i.e., if positions and other values of particles in the universe were analog and had infinite precision, then the laws of physics themselves, and the conditions of the early universe, would need to be equally, infinitely complex. The two earlier examples of programs on a Universal Turing Machine show that this is not so: the universe may be computable even though its computation would require infinite computing power and even though it is completely analog. Moreover, the initial conditions of the universe may be discrete and finite, and yet create a universe of apparent infinite complexity. That infinite complexity is merely apparent, because it can be computed by an infinite process that can be described in a finite number of bits.

    This does not have any direct practical significance, but it now clicks for me why an analog universe can therefore be hypothesized to be computable, and have finite Kolmogorov Complexity. That is quite fascinating.

    ReplyDelete
  66. **BLOGGER BUG**: ONCE THE NUMBER OF COMMENTS REACHES 200 OR MORE [see the count at the beginning of the commentaries] YOU CAN STILL MAKE COMMENTS, BUT TO SEE YOUR COMMENT AFTER YOU HAVE PUBLISHED IT YOU NEED TO SCROLL DOWN TO ALMOST THE BOTTOM OF THE PAGE and click: “Load more…”
           ________________
              Load more…
           ________________
                  ——
    After 200 has been exceeded EVERYONE has to scroll down and click “Load more” each time they want to see all the posts (not just the first 200), and they also have to do that whenever they want to add another comment or reply after 200 has been exceeded.
    If you post your comment really late, I won’t see it, and you have to email me the link so I can find it. Copy/Paste it from the top of your published comment, as it appears right after your name, just as you do when you email me your full set of copy-pasted commentaries before the mid-term and before the final.
                  ——
    WEEK 5: Week 5 is an important week and topic. There is only one topic thread, but please read at least two of the readings, and do at least two skies. I hope Week 5 will be the only week in which we have the 200+ overflow problem, because there are twice the usual number of commentaries: 88 skies + 88 skies + my 176 replies = 352! In every other week there are 2 separate topic threads, each with 88 skies plus my 88 replies (plus room for a few follow-ups when I ask questions).

    ReplyDelete
  67. REPOST - Blogger Deleted Original

    Kaitlin Jewer September 6, 2023 at 6:03 PM
    1a. What is Computation?
    Would the collective functioning of humankind’s thoughts and feelings be considered a universal Turing machine? Every computational device was invented by the human brain, which had to understand the primitive symbols to create it. So if collective human sentience can compute everything, are we collectively a universal Turing machine? Modern-day computers compute faster than the human brain, which frees up more time for humans to create novel Turing machines.
    T1-4 were mentioned in the first lecture. What is meant by T1-4? Is this related to Turing machines?
    Conery’s paragraph about indefinite computations made me wonder whether the human brain is an indefinite computational device. Does the computation begin in utero and continue indefinitely until death (or beyond, if you believe in an afterlife)? Are the primitive symbols written in the zygote, and does the computation become more complex as the organism experiences life?
    In cognitive science today, do people believe that sentience can be explained if we could completely reverse-engineer the human brain? Or are there researchers who believe there is an unknown X factor that cannot be reverse-engineered? This topic made me think about people who are brain dead. If researchers could develop a device (such as a Neuralink) that could mimic every brain function (once the brain has been 100% reverse-engineered) and used it on somebody who is considered brain dead to bring them back to consciousness, would they be sentient again, or would they behave like an AI which has been programmed to act like a human (behaviourally equivalent)?

    ReplyDelete
  68. I found this set of readings to be particularly intuitive, and I feel as though I have a solid grasp on what computation is and how it relates to Turing Machines (possibly because I have studied these concepts before). As I understand it, computation as symbol manipulation essentially means that we can characterize a problem as an arbitrarily large set of states, each of which is represented by a set of symbols, which form what is called a problem space. A computation, which is a solution to a problem, is therefore characterized as a sequence of steps that take as input the current state (and its representing symbols) and produces a corresponding output state (and its representing symbols). The process that accomplishes this is a specific function that transitions from one state to another, according to a predefined set of instructions such as “If State A, then State B”. This sequence of steps, which characterizes the computation, allows for the transition from the initial problem state to the final goal/solution state, through the successive transitions between states internal to the larger problem space.

    It is intuitive to see then that what a Turing Machine performs is exactly computation. The read-write head takes as input the current internal state of the head and the symbol on the tape, and according to a predefined set of instructions, carries out a certain behavior that navigates the problem space in an attempt to reach the solution. The behavior could be to change the symbol, change internal state, move across the tape, or halt, in which case the goal has been reached.
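    A rough sketch of this idea (my own toy machine, not one from the readings): the transition table below is the "predefined set of instructions", mapping the current (state, symbol) pair to what to write, which way to move, and which state to enter next.

```python
# A minimal Turing-machine sketch (a toy example of my own, not from the
# readings). The rules table plays the role of the predefined instructions:
# given the current internal state and the symbol under the head, it says
# what to write, which way to move, and which state to enter next.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        symbol = tape[head] if head < len(tape) else blank
        if (state, symbol) not in rules:        # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1        # this toy never moves off the left end
    return "".join(tape)

# Toy machine: scan rightward, flipping 0s and 1s, and halt at the first blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_turing_machine("10110", flip_rules))   # -> "01001"
```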

    ReplyDelete
  69. (My original comment from September seems to have been deleted or failed to post at the time)
    What struck me throughout all three readings was the continued link drawn between these computing machines and the brain. First, in the reading “What is a Turing Machine?”, the tape of a Turing machine is described as a “working memory”; I’ve only taken one computer science course, so I apologize if this is common terminology in the field, but personally I’ve only used the term in reference to the brain temporarily holding onto information to use for a task. Based on its use here, it seems that we have sought to emulate brain functions within computers from the start, in this example by ascribing a memory system to a machine. Similarly, the second reading analogizes following an algorithm to DNA replication. The third reading refers to brains and computers as both being physical symbol systems, capable of making, changing, and destroying symbols. Again, I’m struck by how tightly linked our understandings of computation and of neural processes are. Overall, I find it interesting how closely our understanding of the brain has tracked technology. For example, in line with the rise of hydraulic-powered machines, 16th-century scientists believed the brain was powered by fluid humors pumped into the brain and then out into the nerves, controlling movements. Now we have the notion of the brain as a computer, though I wonder whether this is a sign of advancement, since the focus seems to have switched to creating AI that replicates our brain.

    ReplyDelete
  70. *REPOST*: I accidentally posted my answer to 1a. under 1b.

    For this week’s readings, I noticed that as I read each article I was learning more about each topic, but also that the deeper an article dove into its topic, the more I realized I didn’t understand. The first reading, “What is a Turing Machine?”, was the easiest of the three to understand. From my understanding, a Turing machine is a computing device that has the ability to manipulate a finite number of physical symbols (instructions are assigned to each symbol by an algorithm(?)). I think I can grasp the very basic idea of a Turing machine, but something that I would like to learn more about is the limitations of a Turing machine besides uncomputable numbers and functions (which may be related to different types of limitations). After reading the second article, “What is Computation?”, I am very curious to know which fields can be boiled down to computation. For example, the reading uses DNA translation as an example of computation, but I had never even considered that human/biological processes could essentially be simplified into an algorithm plus a computation. More specifically, I am interested in which characteristics of human behaviour/function can be excluded from the formal definition of computation, and why.

    ReplyDelete
  71. Reupload:
    1. I was very interested in the idea presented in the Pylyshyn (1980) reading that one fundamental question in cognitive science is understanding how constraints at the physical level could help us understand the way that cognition occurs at the semantic, or knowledge, level. In thinking about this, I got to wondering not only how the physical system facilitates cognition but how it may shape it, which I suppose brings us to the mind/body problem. If we agree with the premise that the cognitive capabilities of a thinking organism are facilitated by the physical system that mediates them, then understanding what binds the mind to the body is critical for understanding cognition.

    ReplyDelete
  72. ** Repost **
    As someone who does not have much background in computer science or technology in general, I was a little worried about understanding these readings. However, I enjoyed how John Conery broke down the rather complex idea of computation into simple terms, defining computation as a series of changes in a system, where the system is defined by a group of symbols. Whether these changes are caused by lots of detailed symbol actions or by one smooth change from something like an analog device doesn't matter. As someone who has taken more physiology/neuro classes within CogSci, the DNA analogy really helped me understand.

    ReplyDelete
  73. I am doing this skywriting late, based on the notes I took on the “What is a Turing Machine?” reading. Since then, we have spent a lot of time on the Turing machine, so I now understand better how it works. The reading basically introduced us to the Turing machine, and I remember when I first read the article being confused about what it did, how it did it, and why it mattered. But the way it is constructed, with the few fundamental operations it can perform, has the goal of offering a theoretical explanation of how the mind might work. This means that while the mind receives sensory input from the world and produces outward behavior, in parallel, the machine, in a specific computational state, reads the symbol under its head (input), interprets it, moves its head based on that interpretation, and writes down another symbol (output), thereby changing its computational state.

    ReplyDelete
  74. If I understand "algorithm" correctly, I believe that human grammars (which generate well-formed expressions and distinguish them from ill-formed ones) are algorithms. If, for example, a sentence one hears is ungrammatical/ill-formed (e.g., "The man sent to the woman"), one would not immediately derive a meaning from it, instead having to resort to pragmatic reasoning to infer its meaning, or not understanding the sentence at all (i.e., there is nothing to be understood...). Ungrammatical sentences are "sentences" in a sense nonetheless; however, they are not sentences that we care about, because they are not interpretable. This is an example of computation describing at least part of what our brains do. Professor Harnad commented on this page, "Computation is a candidate for being that explanation (causal explanation of how our brains are able to do the kinds of things we can do), but there are other candidates". I tried to think of other candidates, but all of the explanations I thought of seem to fall short of the computation-explanation in one way or another... so what exactly are these other candidates?
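    As a toy illustration of a grammar operating as an algorithm (my own drastically simplified fragment, nothing like a real human grammar), the sketch below mechanically accepts or rejects strings by rule, with no access to what any word means:

```python
# A toy grammar-as-algorithm sketch (my invention, vastly simpler than any
# human grammar): a mechanical rule that accepts well-formed strings and
# rejects ill-formed ones, purely by the shapes of the symbols.

LEXICON = {
    "the": "Det", "a": "Det",
    "man": "N", "woman": "N", "letter": "N",
    "sent": "Vt", "saw": "Vt",        # transitive verbs require an object NP
    "slept": "Vi",                    # intransitive verbs do not
}

def is_np(words):
    """NP -> Det N"""
    return (len(words) == 2
            and LEXICON.get(words[0]) == "Det"
            and LEXICON.get(words[1]) == "N")

def is_sentence(sentence):
    """S -> NP Vt NP  |  NP Vi"""
    words = sentence.lower().split()
    if len(words) >= 3 and is_np(words[:2]):
        rest = words[2:]
        if LEXICON.get(rest[0]) == "Vi" and len(rest) == 1:
            return True
        if LEXICON.get(rest[0]) == "Vt" and is_np(rest[1:]):
            return True
    return False

print(is_sentence("the man sent the letter"))   # True  (well-formed)
print(is_sentence("the man sent"))              # False (ill-formed: object missing)
```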

    ReplyDelete
    Replies
    1. The first candidate that came to my mind that could explain our brain is the transmission of chemicals from one neuron to another. Information is transmitted across neurons in the form of electrical impulses and neurotransmitters. However, on second thought, I think this is also a form of computation according to Conery's definition. In this case, the symbol would be whether a neuron is on or off, and a collection of neurons makes up a state of the brain. The computation would be shifting the brain from one state to another, using one group of neurons to excite or inhibit another group. This is also proposed by Marr in his three levels of analysis (computation, algorithm, implementation). For example, Marr suggests that for vision to function, the information coming in through the retina has to be processed through multiple steps, and this is a form of computation. I think a more plausible candidate other than computation could be evolutionary theory, which explains our behaviour by survival of the fittest: our brain is structured in this particular way because this allows us to perform what we can do, so that we can survive in this environment.
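    A McCulloch-Pitts-style toy along these lines (a textbook idealization I am adding for illustration, not a claim about real neurons): each unit is simply on (1) or off (0), and the next brain "state" follows from the current one through excitatory and inhibitory connections plus a firing threshold, so the state transition itself is the computation.

```python
# McCulloch-Pitts-style toy network (an idealization, not real neurons):
# units are ON (1) or OFF (0); the next state is computed from the current
# one via excitatory (+) and inhibitory (-) weights and a firing threshold.

# WEIGHTS[i][j]: influence of unit j on unit i
WEIGHTS = [
    [0, 1, 1],    # unit 0 fires if units 1 and 2 excite it enough
    [0, 0, -2],   # unit 1 is inhibited whenever unit 2 is firing
    [1, 0, 0],    # unit 2 is driven by unit 0
]
THRESHOLD = 1

def next_state(state):
    """One synchronous update: a unit fires iff its summed input meets threshold."""
    return [
        1 if sum(w * s for w, s in zip(row, state)) >= THRESHOLD else 0
        for row in WEIGHTS
    ]

state = [1, 1, 0]            # an initial pattern of firing
for _ in range(4):
    print(state)             # each printed pattern is one "state" of the toy brain
    state = next_state(state)
```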

      Delete
  75. In “Computation is Symbol Manipulation”, Conery gives a definition of computation as being a discrete process where a sequence of states is defined by symbols and the transition from one state to another is the result of some process or body of processes. It is explained that a process could vary from an algorithm that can be executed on a computer to a biological system found in humans in which its states can be represented symbolically, like DNA, which was an example given in the text. As a Cogsci student who has avoided taking the one computer science course I need in order to fulfill my credits, the word “computation” was always scary. However, it was interesting to see how dynamic the definition of computation really is and how it can be used to explain many different processes, not just ones found in a computer.
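    To make the DNA example concrete, here is a toy re-description of translation as symbol manipulation (my own sketch, with only a handful of real codons included): the transition rule is just a lookup table from three-letter codons to amino-acid symbols. In the cell, of course, the agent carrying out these transitions is a biochemical process (the ribosome), not a program; the table below is only a symbolic re-description.

```python
# Toy re-description of DNA/mRNA translation as symbol manipulation:
# the "states" are positions along the mRNA string, and the transition rule
# is a lookup table mapping each three-letter codon to an amino-acid symbol.
# Only a few real codons are included, purely for illustration.

CODON_TABLE = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):          # step codon by codon
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUAA"))   # -> ['Met', 'Phe', 'Gly']
```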

    ReplyDelete
  76. --REPOST-- comment was deleted first time around:
    In reading the texts for the week, the idea that took up the most space in my mind was how inextricably the concepts and terminology derived from the field of computer science are woven into the field of psychology. While this may seem like a redundant statement, especially given the nature of the course, I think modern vernacular has absorbed words like "recall," "process" and so forth to the point where the layperson might take them for granted. In Ian Horswill's text "What is Computation?", he touches on the field of computational neuroscience, which gleans its conceptual frameworks for understanding the human brain from the logic of computer science. He uses the functional model of computation, which asserts that if two different programs' outputs for a given question are the same then they are behaviorally equivalent, to underline the behavioral equivalence between the brain and a computer. In the wake of AI, drawing an equivalence between a mind and a computer might spark some heated debates, but Horswill reassures us with the not-so-reassuring caveat that, in order to say that a computer operates like a brain and a brain operates like a computer, we would have to map out the entirety of the brain's neural network. Even if this were possible, which, given the number of neurons and the redundancies of their connections, seems like a lot to ask, the computer's simulation of the mind would only be as good as the mind it's simulating.
    - Still, while reading this text and studying the little computer science I can get from COMP204, the idea that a brain COULD be a computer is downright fascinating. If we take the idea further, one could imagine brain regions like the amygdala as something like a self-preservation program, or individual neurons as transistors speaking in binary (0 for no action potential, and 1 for an action potential).
    I also wanted to note that, given the paper's publication date, it could not have taken into account the body of literature that studies the gut as "a second mind," ever elusive. This might open a door, on the day that a computer manages to simulate a human mind, only to find that there is still something missing.

    ReplyDelete
  77. What struck me most about the Turing reading was the unimportance of the machine's physical construction. And what I found most interesting about the second reading was what it might imply for language learning as computation. Most of my thoughts, though, centered on the last reading. That paper begins by defining a “physical symbol system” (PSS) as having the necessary and sufficient means for intelligent action. The word “symbol” is defined, as is “symbol system” and, finally, “physical symbol system”; however, the key word “intelligent” is not defined, other than as something that can describe the action of a PSS. The word “action” isn’t defined either. I’ll let that slide, but not without commenting on the claim that symbols must affect action. Is it not enough for a symbol, such as a thermometer reading, to be simply enjoyed and understood? Or are we considering the action potentials that underlie understanding as “actions” in themselves? Along that line of thought: I do not understand what is meant in lines 7&8 when it is claimed that symbols in a PSS are physical things contained in brains. Is this referring to the neural readout from symbol recognition? Or symbol visualization?

    ReplyDelete
  78. Mainly, this reading made me reflect on the types of symbols and models that we use in science and in everyday life, and how these symbols, whether they be analogue or digital, map onto what they are trying to model. Specifically, when the text claims that models are judged not by their correctness but by their usefulness, it made me think of the Bohr model of the atom (this idea of undependable yet useful models maps onto the popular argument for realism). Lastly, and what I consider most interesting, I could not help but wonder: is language a symbol for meaning? Or for speech? Does the level of abstraction serve a similar purpose in models as it does in language learning?

    ReplyDelete
  79. Considering that computation is purely syntactic, that it holds no semantic information and is dictated purely by rules applied to arbitrary symbols which have no real attachment to what they symbolize, it makes you wonder how someone can compare this to cognition. Surely there is room for understanding semantics in the model of the human machine (if that weren't the case, I would REALLY have a hard time understanding the readings this week). So would one posit that there is space for semantics in more complex, non-living symbol manipulators? Or would you have to agree with a dualist ideology that this semantic understanding lies somewhere outside the human machine?

    ReplyDelete
  80. In class, the professor mentioned that cognition is a weasel word: it really just means whatever is going on in our brain. According to computationalism, cognition is purely computation, in other words, symbol manipulation.
    Connecting this to the third-grade-teacher example, we abstracted the teacher’s information into symbols and stored it in our memory (the tape). What we are trying to answer is how the information is retrieved from storage (how the read head works).

    ReplyDelete
  81. I’m wondering why individuals who receive the same input may produce very different outputs. Does this mean everyone has slightly different algorithms, or what are the factors that affect the manipulation process? I think this may imply that cognition is more than just computation. (My argument may not be valid, since the only thing we are sure of is our own sensations; we can’t really know whether others have different sensations, or whether they have sensations at all.)

    ReplyDelete
  82. In the article “What is Computation?”, Conery defines what computation is and shows how it is related to an algorithm. He defines computation as a sequence of state transitions, which is carried out by an algorithm. He further explains that it is the algorithm that makes a computation a “sequence of states” rather than a disconnected leap from input to output. Conery also defines what an algorithm is, stating that an algorithm is what enables us to break down a complex problem into its smaller components, and that the working of the algorithm is what assures us the computation is progressing toward its final solution. This reading was really fascinating, since it defined concepts I have been using all the time. It was especially interesting because I had always used the word “algorithm” without knowing its real definition.
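    A small sketch of that "sequence of state transitions" idea (my own example, not Conery's): Euclid's algorithm for the greatest common divisor, with each intermediate state printed so the computation appears as a series of states rather than a single leap from input to output.

```python
# Euclid's algorithm with its intermediate states printed, so the computation
# is visible as a sequence of state transitions dictated by the algorithm
# rather than a single jump from input to output.

def gcd_with_trace(a, b):
    while b != 0:
        print((a, b))          # the current state of the computation
        a, b = b, a % b        # one state transition
    print((a, b))              # the final (halting) state
    return a

gcd_with_trace(252, 105)
# States: (252, 105) -> (105, 42) -> (42, 21) -> (21, 0); the answer is 21.
```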

    ReplyDelete
  83. *REPOST*

    - The continuous “begging of the question” of what constitutes a computation, a step, a state, forces us to interrogate the rigor of our distinctions and whether they are so distinct. Denning’s point that continuous physical processes may also be computations is an interesting one, and it brings forth the line of reasoning that leads us to assert that brains are computers, which can offer nice analogies yet might take us backwards if our goal was to find nicely bounded definitions of the relevant terms…
    - Thinking of the mind and silicon-based computers as both being symbol manipulation systems again offers a helpful parallel, and allows for the hope that a human-like AI can be created not by focusing on the lowest levels of abstraction (neuronal connections and chemical makeup) but instead on the construction of higher-order principles such as symbol usage and manifestation.

    > "…we conjecture that you do not have to emulate every level of a human to build an AI agent but rather you can emulate the higher levels and build them on the foundation of modern computers. This conjecture is part of what AI studies."

    ReplyDelete
  84. The paper "Computing in Cognitive Science" by Zenon Pylyshyn explores the impact of computers on the study of human cognition. The author discusses the various roles that computing plays in cognitive science, from being an instrumental tool to express theories to the empirical claim that cognition is a form of computing. The paper also examines the evolution of the influence of computing on cognitive science over time. The author argues that the use of computers has led to a shift in the way cognitive science research is conducted, with a greater emphasis on computational models and simulations. The paper also highlights the limitations of the computational approach and the need for a more comprehensive understanding of cognition that takes into account the biological and environmental factors that shape human cognition. Overall, the paper provides a comprehensive overview of the role of computing in cognitive science and its impact on our understanding of human cognition.

    ReplyDelete
  85. I think the reading “The Computational Theory of Mind”, specifically the debate over connectionism’s biological plausibility, raises interesting questions about the balance between model realism and computational efficiency. How close must neural network models be to biological processes in order to accurately reflect the processes they try to emulate? Also, the challenge of explaining rapid cognition in a brain made of relatively slow neurons prompts discussion of the nature of mental speed and of the computational strategies researchers appeal to. How can classical and connectionist computationalists address this question of cognitive processing speed?

    ReplyDelete
