Wednesday, October 28, 2009

Conceptual Halo (Preface 5)

In this section, Hofstadter discusses the conceptual halo and the expansion of his Seek-Whence program into his Copycat project. The conceptual halo seems nearly identical to the conceptual sphere he talked about in previous chapters. The First Lady of England example, or the words in different languages, could be moved into the section where he first discusses the conceptual sphere and I don't think it would make any difference to the reader. I am not sure why he decided to change the name from conceptual sphere to halo when the idea seems to be the same. The only real difference is the new kind of example he brings up: the "slips," where a person says the wrong word or combines two words because two parallel thoughts are running in their head.

Hofstadter then introduces the groundwork for another one of his projects, Copycat. The Copycat program seems to take the basic idea of patterns from Seek-Whence and combine it with the analogy-making idea he discussed in the previous chapter. Then, for what seems like nothing more than a "why not?" reason, Hofstadter changes the patterns from numbers to letters. This project seems more interesting than the previous Numbo and Jumbo projects, simply because it has more of a puzzle or riddle feel to it. It could just be that these problems are presented in sentence form, but they seem more engaging.
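
To give a feel for the format, here is a small sketch of the kind of letter-string puzzle Copycat works on, such as "abc changes to abd; what does ijk change to?". The increment-the-last-letter rule and the function names below are my own stand-ins for illustration, not Copycat's actual architecture, which relies on a Slipnet, a Workspace, and codelets.

```python
# A toy illustration of the kind of letter-string analogy Copycat tackles
# (e.g. "abc changes to abd; what does ijk change to?").
# The literal rule below is my own stand-in, NOT Copycat's method, which
# builds answers with a Slipnet, a Workspace, and swarms of codelets.

def successor(ch: str) -> str:
    """Return the next letter of the alphabet, wrapping z back to a."""
    return chr((ord(ch) - ord('a') + 1) % 26 + ord('a'))

def apply_last_letter_rule(target: str) -> str:
    """Apply the literal rule 'replace the last letter by its successor'."""
    return target[:-1] + successor(target[-1])

if __name__ == "__main__":
    print(apply_last_letter_rule("ijk"))   # ijl -- the answer most people give
    print(apply_last_letter_rule("xyz"))   # xya -- a much less satisfying answer,
    # which is exactly where Copycat's flexible, perception-driven approach matters
```

Even this toy rule hints at why the problems feel like riddles: applied blindly to "xyz" it produces an answer a human would probably reject.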

The Influence Of Perception (Chapter 4)

In this section, Hofstadter discusses the ideas of perception, representation, and analogy-making in relation to the human thought process. I found his discussion of perception in the first part of this section to be the most interesting. Hofstadter briefly talks about low-level perception, which is mostly made up of processes we do not mentally control, such as vision and hearing. Low-level perception obviously gets more complicated than our eyes and ears, but Hofstadter does not go into much detail about it.

Hofstadter goes into much more detail about high-level perception and the influences that affect it. This is what makes perception more interesting to me than representation and analogy: it seems to be the most flexible and complex idea of the three. Some of the influences Hofstadter brings up that can change one's perception of an idea or object are beliefs, goals, and context. Whether it is a preconceived notion, knowledge of the situation, or simply the surrounding context, any of these can radically change the perception of something in the blink of an eye, or from person to person.

This high-level perception also relates directly to previous sections in Hofstadter's book. One of these is the conceptual sphere Hofstadter talks about earlier in the book. The example on page 174, in which a piece of paper can be viewed as either a writing surface or a combustible material, can be used to create a conceptual sphere. With the piece of paper as the core, all the different possible perceptions make up the rest of the sphere.

A second connection can be made to the Numbo and Jumbo problems. One of the main difficulties in designing these programs was accounting for how different individuals may get different answers to the same problem. This is a prime example of how a person's knowledge of a subject influences how they perceive a problem.

The Eliza Effect (Preface 4)

In this section, Hofstadter talks mostly about the "Eliza Effect," relating it to other critically acclaimed programs in the field of artificial intelligence. Hofstadter defines the Eliza Effect as "the susceptibility of people to read far more understanding than is warranted into strings of symbols - especially words - strung together by computers." In short, this means that people tend to give computers too much credit for "intelligence" rather than crediting their systematic functionality. To a computer, a string is a string; whether it is a name or an incomprehensible combination of random letters, it has no meaning to the computer. This is a misunderstanding that the general public has.

Much of this section is used by Hofstadter to demonstrate exactly that point. In particular, he uses the ACME program, which received much praise, to make his case. The ACME program takes a set of objects and predicates and strings them together to create a pattern or "story." Hofstadter makes his point by merely substituting some of these strings with single-letter representations, which makes the resulting pattern look like nothing more than random clutter. While the programs Hofstadter mentions in this section do accomplish something and deserve some credit, the Eliza Effect led to their being viewed as something much greater than they really are.
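
To see that substitution point in code, here is a minimal sketch of a purely structural matcher; it is my own toy, not ACME's actual implementation. It treats a story told with meaningful words exactly the same as one told with arbitrary letters, because only the pattern of repeated symbols ever reaches the program.

```python
# A minimal sketch (not ACME's actual code) of Hofstadter's substitution point:
# a purely structural matcher behaves identically whether its symbols are
# meaningful English words or opaque single letters.

def structural_signature(story):
    """Reduce a list of (predicate, args) facts to bare structure:
    which argument slots are filled by the same object, ignoring all names."""
    objects = {}
    signature = []
    for _, args in story:
        slot_pattern = []
        for obj in args:
            objects.setdefault(obj, len(objects))   # number objects by first appearance
            slot_pattern.append(objects[obj])
        signature.append(tuple(slot_pattern))
    return signature

# The "meaningful" version of a tiny story...
story_words   = [("loves", ("Romeo", "Juliet")), ("feuds", ("Montague", "Capulet"))]
# ...and the same story with every symbol replaced by an arbitrary letter.
story_letters = [("p", ("a", "b")), ("q", ("c", "d"))]

# To the program the two are indistinguishable; the "understanding" is in the reader.
print(structural_signature(story_words) == structural_signature(story_letters))  # True
```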

Two Blogs Posted On Wrong Site

The Numbo Part 1 blog (Tues. Oct. 6th) and a Jumbo blog (Thurs. Sept. 31) were posted on the wrong blog. Here is the link to them:

http://cog366ref.blogspot.com/search?updated-min=2009-01-01T00%3A00%3A00-08%3A00&updated-max=2010-01-01T00%3A00%3A00-08%3A00&max-results=3

Thursday, October 8, 2009

Numbo Part 2

In the second half of the section on Numbo by Daniel Defays, he talks in detail about how Numbo works in theory, followed by a description of some sample runs. As expected, Numbo works very similarly to Jumbo, with some small differences. As previously mentioned, it features a Pnet as a sort of knowledge base and factors in randomness more so than Jumbo. Like Jumbo, it features a "cytoplasm," which for the most part works much like Jumbo's. The biggest difference between the two is how they backtrack when a block is judged unattractive. Jumbo has a very simple way of backtracking up a pathway one level at a time, whereas Numbo is more volatile in how it removes bricks or destroys blocks altogether, like the 77 block in the first trace run of Numbo.
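
As a rough way to picture that volatility, here is a small sketch of a Numbo-style search loop. The puzzle, the attractiveness test, and the probabilities are all my own simplifications; the real program drives these decisions with codelets and the Pnet rather than the bare random choices below.

```python
import random

# A loose sketch of Numbo-style search, NOT Defays' actual code: bricks get
# combined into blocks, and a block can be torn all the way back down to its
# bricks, rather than backtracking one level at a time as Jumbo does.
# The puzzle, the attractiveness test, and the probabilities are my own
# simplifications; the real program drives these choices with codelets and the Pnet.

def solve(bricks, target, max_steps=5000):
    # The "cytoplasm" holds (value, bricks-it-was-built-from) structures.
    cytoplasm = [(b, (b,)) for b in bricks]
    for _ in range(max_steps):
        for value, parts in cytoplasm:
            if value == target:
                return value, parts            # target reached
        if len(cytoplasm) >= 2 and random.random() < 0.7:
            # Combine two random structures with a random operation.
            (v1, p1), (v2, p2) = random.sample(cytoplasm, 2)
            op = random.choice(["+", "-", "*"])
            value = {"+": v1 + v2, "-": abs(v1 - v2), "*": v1 * v2}[op]
            if value > 3 * target:
                continue                       # judged unattractive: discard outright
            cytoplasm.remove((v1, p1))
            cytoplasm.remove((v2, p2))
            cytoplasm.append((value, p1 + p2))
        else:
            # Occasionally dismantle an existing block back into its bricks,
            # loosely like the unattractive 77 block being destroyed in the trace.
            value, parts = random.choice(cytoplasm)
            if len(parts) > 1:
                cytoplasm.remove((value, parts))
                cytoplasm.extend((b, (b,)) for b in parts)
    return None                                # random search may come up empty

# A made-up puzzle in the Numbo format: reach 114 from the bricks below.
print(solve(bricks=[1, 6, 7, 11, 20], target=114))
```

The only point of the sketch is the last branch: instead of stepping back one level, a structure can be destroyed outright at almost any time.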

Overall, I feel that Numbo does a good job of simulating how a human would work one of these problems, or at least as good a job as it may get. As Defays mentions, Numbo does have some shortcomings on some simple problems, like the final puzzle he gives, but that is mostly because you can only load the Pnet with so many proximity targets and facts. It is simply not possible for a program to be as knowledgeable as a capable human when it comes to its "knowledge base." Also, as Defays suggests, it is very difficult, if not impossible, to fully describe how a human would solve one of these problems. Whether subconscious or just unnoticed, there will always be some steps left out. With that said, the randomness, the Pnet, and the other facets of how Numbo runs make it as close as it can get to the human mind, at least for now.

Wednesday, September 23, 2009

Jumbo A.I.

In previous reflections, I have disputed the "artificial intelligence" level of Hofstadter's programs, calling them sequential and machine-like in their process. I have also said that writing a program to resemble the way a human would go about these puzzles would be nearly impossible. However, in this latest section, Hofstadter goes into detail about how the mechanics behind "Jumbo" work and the steps he took to make it more of an artificial intelligence program than a mathematical problem solver. The steps he takes, such as having the program offer possible answers to the puzzle rather than looking up the exact answer in a dictionary, make it seem much more plausible that this replicates how a person may go about solving one of these anagrams. Hofstadter also does an excellent job of explaining the process through analogies such as molecules or bonds between humans.

The quality of Jumbo that really sells me on it as a program representative of human processes is that it does not use a dictionary. Hofstadter says of leaving out a dictionary knowledge base "that is irrelevant to the mental processes I am attempting to model." Instead of being an "expert system," the purpose of Jumbo is to compose words through a building process; whether those words are recognizable or not is irrelevant. This process of using letters to form clusters, which then form "gloms," which eventually end up as a word, is what I believe truly sells this as a program that represents human skills and processes.
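
Here is a very rough sketch of that letters-to-clusters-to-gloms pipeline as I read it. It is not Hofstadter's codelet-based Jumbo; the pairing rule and the random glomming are stand-ins meant only to show that word-like structures can be built with no dictionary anywhere in the loop.

```python
import random

# A rough sketch of the letters -> clusters -> gloms -> word pipeline, as I
# understand it. This is NOT Hofstadter's codelet-based Jumbo; the pairing rule
# and the random glomming are stand-ins. The point it keeps is that word-LIKE
# structures get built from letter affinities with no dictionary anywhere.

VOWELS = set("aeiou")

def build_clusters(letters):
    """Pair consonants with vowels into small syllable-like clusters."""
    pool = list(letters)
    random.shuffle(pool)
    consonants = [c for c in pool if c not in VOWELS]
    vowels = [v for v in pool if v in VOWELS]
    clusters = []
    while consonants and vowels:
        clusters.append(consonants.pop() + vowels.pop())   # e.g. "pa", "le"
    clusters.extend(consonants + vowels)                   # leftovers stand alone
    return clusters

def glom(clusters):
    """Glom the clusters together into one candidate 'word', recognizable or not."""
    random.shuffle(clusters)
    return "".join(clusters)

if __name__ == "__main__":
    # Five candidate gloms from the letters of "pangle"; plausibility, not
    # dictionary correctness, is all the process aims for.
    for _ in range(5):
        print(glom(build_clusters("pangle")))
```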

Monday, September 21, 2009

Programmatically Solving Anagrams

In this reading, Hofstadter talks about the process by which humans decipher anagrams and recognize words, and connects it to a program he wrote in LISP to decipher these anagrams. Even though he attempts to break down the process by which the human brain, or more specifically he himself, turns a group of letters such as TARFD (the example given in one of the illustrations) into a recognizable word, it seems like a rather ambiguous task to write a program that simulates the human mind in this process. This is mostly because there is no real process someone can pinpoint as to how the mind does this.

There are many ideas about different parts of the process of solving an anagram, but I do not believe there is any way Hofstadter or anyone else can pinpoint one specific process as to how it is done. I believe this is because much of this process is done subconsciously and kind of "just happens." This is similar to how Hofstadter explains that we recognize the word "nights" using segments like "ght" and "s." While this may possibly be true, the process seems to happen too much on a subconscious level to say concretely that this is how it is done.

Relating back to the program that solves these anagrams: while it is obvious that it can be done, it does not really resemble how the human mind solves an anagram. It is merely a series of processes that use complex mathematical algorithms to systematically find an answer. The closest such a program can come to resembling the human mind, I believe, is to take different ideas of how the mind may go about this process and bunch them all together in a trial-and-error fashion.
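
To make that contrast concrete, here is a small sketch of the two approaches side by side: a brute-force search that systematically checks every ordering against a word list, and a chunk-scoring heuristic loosely inspired by the "ght" and "s" idea above. The tiny word list and chunk list are my own stand-ins, not anything taken from Hofstadter's program.

```python
from itertools import permutations

# Two toy takes on the TARFD-style anagram problem discussed above. Both the
# tiny word list and the "familiar chunk" list are my own stand-ins for
# illustration; neither is Hofstadter's program.

WORDS = {"draft", "night", "thing"}                  # stand-in dictionary
FAMILIAR_CHUNKS = ["ght", "ing", "dr", "ft", "st"]   # stand-in letter segments

def brute_force(letters):
    """The 'systematic' route: try every ordering until one is a known word."""
    for perm in permutations(letters):
        candidate = "".join(perm)
        if candidate in WORDS:
            return candidate
    return None

def chunk_score(candidate):
    """The chunk idea: rate an arrangement by the familiar segments it contains."""
    return sum(len(chunk) for chunk in FAMILIAR_CHUNKS if chunk in candidate)

def best_by_chunks(letters, top=3):
    """Return the orderings that look most word-like, actual words or not."""
    candidates = {"".join(p) for p in permutations(letters)}
    return sorted(candidates, key=chunk_score, reverse=True)[:top]

if __name__ == "__main__":
    print(brute_force("tarfd"))       # draft
    print(best_by_chunks("tghin"))    # arrangements with chunks like "ght" or "ing"
                                      # rise to the top, whether or not they are words
```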