LMU ☀️ CMSI 2310
LANGUAGE, THOUGHT, AND COMPUTATION
HOMEWORK #4 PARTIAL ANSWERS
  1. Many people tend to believe that "thinking" is the exact opposite of "doing exactly what one is told," when in reality these are not mutually exclusive. A computer can be programmed to navigate from one (mental) state to another based on a set of rules that take into account all sorts of information. This is what people do when thinking; they use the knowledge in their current mental state to move to a subsequent state. We have no evidence that we are not "following a program" in our own thinking -- programs involve decision making and repetition (if-then statements and while-statements). The countless ways a program can play out are often wholly unexpected to the programmer; in fact, one can often hear a programmer exclaiming "the program has a mind of its own" (or at least "WTF?!?").
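
    As a purely illustrative sketch of this point (the states and rules below are made up), here is a tiny Python program that "navigates from one state to another based on a set of rules," using exactly the decision making and repetition mentioned above:

    def follow_rules(rules, state, goal):
        # Repetition: keep applying rules until the goal state is reached.
        while state != goal:
            # Decision making: choose the next state, or stop if no rule applies.
            if state not in rules:
                raise ValueError("no rule for state: " + state)
            state = rules[state]
        return state

    # Hypothetical "mental" states and transition rules.
    rules = {"hungry": "planning dinner", "planning dinner": "cooking", "cooking": "content"}
    print(follow_rules(rules, "hungry", "content"))   # -> content
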
  2. Examples of systems whose components have a canceling effect on each other: (1) overtones in music, (2) molecules in a liquid, (3) waves or particles of light, and (4) electrical current. Examples of systems whose components' behavior has huge high-level consequences: (1) computer programs, where simply replacing a period with a comma could drastically change the meaning of a program; (2) finance, where adding a zero or two to a balance or amount dramatically changes one's situation; and (3) DNA. Another interesting answer: the bullet that misses the heart by one millimeter.
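
    The "period vs. comma" point can be seen in a single line of Python (a contrived example): one character turns a number into something entirely different.

    price_a = 1.5    # the float one-and-a-half
    price_b = 1,5    # the tuple (1, 5) -- one character, a very different meaning
    print(price_a, price_b)   # -> 1.5 (1, 5)
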
  3. Five examples of epiphenomena:
    1. Consciousness (maybe)
    2. Gravity
    3. Light
    4. Spirituality
    5. Experience
  4. Achilles sees the sole "MU" at the top level of the picture. The Tortoise realizes that the whole picture is made up of hundreds of tiny "MU"s.
  5. Signals and symbols are components of complex, multilayered systems. (Note: a non-multilayered system is like a container of gas; you just have molecules and the gas itself. An ant colony is multilayered: ants, castes, teams, teams of teams, etc., up to a whole colony. The intermediate levels display organized activity.)

    Signals are used to transport "messages." In an ant colony, signals are the lowest-level teams. They form to carry out some activity, such as food gathering, which looks purposeful from the point of view of the colony, but meaningless from the point of view of individual ants. Signals can only form when a critical mass of components (such as ants) is present.

    Symbols are levels at which meaning is present; in an ant colony, active symbols are teams of "sufficiently high level." Active symbols are distinct from passive symbols, which just sit around waiting to be processed. One can say "signals are to symbols as letters are to words." The key idea is that symbols trigger other symbols (a toy sketch appears at the end of this answer).

    Conscious systems operate only at the level of symbols; they cannot see the lower levels, e.g., signals.
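
    Here is the toy sketch of symbol triggering mentioned above; the symbol names and trigger table are invented purely for illustration.

    # Each active symbol, when awakened, triggers other symbols.
    triggers = {
        "grandmother": ["kindness", "cookies"],
        "cookies": ["hunger"],
        "kindness": [],
        "hunger": [],
    }

    def awaken(symbol, active=None):
        # Activate a symbol and, recursively, everything it triggers.
        active = set() if active is None else active
        if symbol not in active:
            active.add(symbol)
            for triggered in triggers.get(symbol, []):
                awaken(triggered, active)
        return active

    print(sorted(awaken("grandmother")))   # -> ['cookies', 'grandmother', 'hunger', 'kindness']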

  6. The mind has an easy time intermingling fact and fantasy because our representations of people, places, events, etc. are intensional, rather than extensional. That is, the representations (descriptions) do not have to be tied to specific objects. They can just as easily be attached to real or imagined entities.
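
    One way to picture the distinction (an invented Python sketch): an extensional representation enumerates particular objects, while an intensional one is a description that any entity, real or imagined, may satisfy.

    # Extensional: tied to specific, enumerated individuals.
    people_met = {"Alice", "Bob"}   # made-up names

    # Intensional: a description; nothing requires a satisfying entity to exist.
    def is_friendly_giant(entity):
        return entity.get("height_m", 0) > 3 and entity.get("friendly", False)

    bfg = {"name": "BFG", "height_m": 7, "friendly": True}   # an imagined entity
    print(is_friendly_giant(bfg))   # -> True; no real object needed
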
  7. Achilles for some reason isn't able to see that an "infinite number of truths" can be specified finitely. But we know what recursion is, so we know that this is indeed possible.
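
    For instance (a small Python sketch with an invented predicate), two short rules finitely specify the infinitely many truths "0 is a natural number," "1 is a natural number," and so on:

    def is_natural(n):
        # Base case: 0 is a natural number.
        if n == 0:
            return True
        # Recursive step: n is a natural number if n - 1 is.
        return n > 0 and is_natural(n - 1)

    print(all(is_natural(k) for k in range(5)))   # -> True; any particular truth is derivable
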
  8. Python FTW!
    def two_to_the_three_to_the(n):
        return 2 ** (3 ** n)
    
    Technically, this really isn't the best answer. Hofstadter designed BlooP to express bounded-loop computation in which the basic steps are extremely simple. BlooP doesn't have exponentiation, but we can implement exponentiation by performing multiplication in loops. For this problem, the loops are indeed bounded. Here is a translation into Ruby that is much closer to the spirit of the original BlooP:
    def two_to_the_three_to_the(n)
      # First compute 3**n by repeated multiplication (a loop bounded by n).
      b = 1
      n.times { b = b * 3 }
      # Then compute 2**(3**n) by repeated multiplication (a loop bounded by b).
      a = 1
      b.times { a = a * 2 }
      return a
    end
    
  9. Searle would likely not consider passing the Turing Test to indicate thinking. We know from his Chinese Room argument that he would certainly not use the word "understanding"; we can only really guess how he would use the word "thinking" (at least at this point in the course). However, we can employ an analogy here. Because Searle says that communication does not necessarily imply understanding (the Chinese Room), he would probably say that imitation (the Turing Test) does not imply thinking. This is just an educated guess; perhaps we should just ask him.
  10. The program (or machine) needs a representation of dogs and windows that includes a large number of their physical and behavioral properties, as well as the notions of what it means for humans to see and want and press [their noses against something].
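
    A rough, invented sketch of what such a representation might begin to look like (every slot and value here is hypothetical, and a real program would need vastly more knowledge):

    dog = {
        "kind": "animal",
        "physical": ["has a nose", "furry", "smaller than a door"],
        "behavioral": ["barks", "can see things", "can want things"],
    }
    window = {
        "kind": "artifact",
        "physical": ["transparent", "solid", "separates inside from outside"],
    }

    def can_see_through(thing):
        # A crude stand-in for the notion of seeing through something.
        return "transparent" in thing["physical"]

    print(can_see_through(window))   # -> True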