
March 30, 2011

Mental Juggling


I was going to post about the plastic formation the size of Texas in the Pacific known as 'The Great Pacific Garbage Patch,' but that will have to wait until my next post. The delay is because I've been reading a fair bit of Steven Pinker, and while looking through new comments I found something peculiar that strangely matched up with what I've been reading.

AhmTaGoji was writing about what it would mean for a computer to pass the Turing Test and said that,

"What I was trying to say was that no matter how convincing or uncanny of a performance a computer does during a Turing test is only evidence of the what the test attempts to prove: that a human can be fooled about conscious identity via a computer screen. The computer would have to go much further than to fool one human about his identity, he would have to, for a lack of better words, fool himself. Long after the Turing exercise, if the computer learns from his experiences and starts asking itself questions like 'What am I doing here? How did I get here? I'm human aren't I? Or what am I?', and begins to autonomously prepare himself in the anticipation of future experiences, then he isn't yet conscious."

That's a funny, yet perfect way of putting it: 'The computer would have to go much further than to fool one human about his identity; he would have to, for lack of better words, fool himself.'

Besides the assumption that the computer is a man, I think that's a great understanding, because the more I read about consciousness, the more fooling ourselves seems to be what we do as well. That act of the mind fooling itself and giving itself a coherent reason for doing the things it does is a large part of what it is to be 'human,' and it would also be a large part of what it would take for a lot of people to consider a computer program sentient.

The Blank Slate, on pages 42-44, while talking about how the self is just another network of brain systems, gets into how the brain tricks itself into having a reason for doing certain things. This trickery shows up most dramatically in people who have had their corpus callosum cut (the band of nerve fibers that connects the two hemispheres of the brain). The operation can literally be described as cutting the self in two: each half acts independently, while the other half juggles to make sense of what is going on.

The example is given where an experimenter shows the word 'walk' to the right hemisphere (by keeping it in the part of the visual field that only the right hemisphere can see), and the person complies and begins to walk. Admittedly not that interesting yet, but a strange thing happens when the person is asked why he got up and started walking (language sits in the person's left hemisphere). The person doesn't say, "I just got a feeling to," or, "Since my surgery where my corpus callosum was cut, I do things without knowing exactly why."

Instead he says, in complete sincerity, that he was going to get a Coke. He manufactures a logical reason for what he was doing, even though it can be objectively shown that the real reason was different.

That's the trickery coming in, and Pinker concludes that, "The spooky part is that we have no reason to think that the baloney-generator in the patient's left hemisphere is behaving any differently from ours as we make sense of the inclinations emanating from the rest of our brains. The conscious mind - the self or soul - is a spin doctor, not the commander in chief."

AhmTaGoji's point may be that for someone to consider a machine 'a person,' the machine would have to do more than talk; it would have to adamantly claim for itself that it was a person. But the point also connects very well with the fool inside all of us.

Happy Spinning,
-the moral skeptic

March 2, 2011

Judging Computer Autonomy


Well, I can say that I was satisfied with the content of my last post, but I did let a couple of issues slide past that I would have liked to address. I really let Ray have his way due to the time constraint of having to go to work. Damn work, requiring its workers to be there at certain times, and damn money, needing to be gambled away.

Anyway, Ray Kurzweil has been criticized for his claim that all technologies progress at an exponential rate rather than a linear one, and that thinking linearly leads people to overestimate progress in the short term and grossly underestimate it in the long term.
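To make that intuition concrete, here is a minimal Python sketch. The numbers are toy values I picked for illustration (a unit of progress per year versus a two-year doubling period), not anything from Kurzweil's data:

```python
# Toy comparison of linear vs. exponential projections. The starting
# value, yearly gain, and two-year doubling period are assumptions
# chosen purely for illustration.
def linear_projection(start, yearly_gain, years):
    """Add a fixed amount of progress each year."""
    return start + yearly_gain * years

def exponential_projection(start, doubling_years, years):
    """Double the capability every `doubling_years` years."""
    return start * 2 ** (years / doubling_years)

for years in (2, 10, 40):
    lin = linear_projection(1.0, yearly_gain=1.0, years=years)
    exp = exponential_projection(1.0, doubling_years=2, years=years)
    print(f"{years:>2} years: linear {lin:>6.0f}   exponential {exp:>10.0f}")
```

With these toy numbers, the linear guess actually runs ahead for the first couple of years (the short-term overestimate), falls roughly a factor of three behind by year ten, and is off by a factor of about 25,000 by year forty (the gross long-term underestimate).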

Yet there is room for criticism, because not all technologies are moving at an exponential rate. Curing different cancers, gas mileage for cars, and the comfort of computer chairs haven't progressed at anything like the speed of decoding the human genome.


So the criticism is semi-warranted, because Kurzweil does overstate his case a little bit, but does it really matter? I don't think so. If only one technology continues to improve at the exponential rate it has historically been on, then the criticism is really just window dressing distracting from the implications that are fun to think about. That technology is, of course, the computer.

So long as the progress of computers is exponential (which it is), the implications will be vast enough to make Kurzweil's predictions for the most part correct. Looking at what 'Watson' did a couple of weeks ago, understanding human language and answering questions, it is impossible not to look into the future and think about the possibilities, some of which I mentioned in my last post.

As the saying attributed to Yogi Berra goes, 'It is hard to make predictions, especially about the future,' but Kurzweil has a good track record, and there is really no harm done in being wrong. The people who foresaw the flying car never broke into your house and peed on your rug.

Apart from the flying car, one prediction seems to stand out from the rest: when will humans build something that can be described as having a self? When will something be built and be said to be conscious? Well, that's a tricky question to answer, for a number of different reasons.


The standard test for telling whether an animal has a concept of self is the mirror test, which involves putting an animal in front of a mirror with a dot on its body that it can only see in the mirror. The great apes, dolphins, orcas, elephants, and a couple of species of birds have all passed the mirror test. To pass, the animal has to notice the dot in the mirror and connect it to its own body; this shows an understanding that the someone in the mirror is 'me,' along with curiosity about why there is a dot on my back. As a control, another dot is placed somewhere on the animal that stays out of sight while it is looking in the mirror, to rule out coincidental reactions.

This test has been used to show that babies up to 18 months, dogs, and cats don't react to the dot in any way and therefore lack a concept of self, something anyone who has seen very young or handicapped children play 'with the boy/girl in the mirror' will have noticed.

Well, this test wouldn't work at all for computers, because a computer could be specifically built to pass the mirror test and respond to a dot placed on it. Testing for consciousness in a machine really breaks our standards, which is why people look to the Turing Test as the mirror test for computers. The test involves a judge talking through two chat boxes, one connected to another human and one to a computer program. The judge then has to decide which one is the program and which is the human; when the two are indistinguishable, the computer has passed the test.
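For what it's worth, the whole protocol fits in a few lines. Here is a minimal Python sketch; the responder functions and the single judging round are my own simplifications for illustration, not a standard implementation:

```python
import random

# A toy sketch of the imitation game. Both responders are hypothetical
# stand-ins: in a real test one channel is a live human confederate and
# the other is the candidate chat program.
def human_reply(prompt: str) -> str:
    return input(f"[human, answering '{prompt}'] > ")

def program_reply(prompt: str) -> str:
    return "Hmm, that depends on what you mean."  # stub chat program

def imitation_game(questions) -> bool:
    """The judge questions two hidden channels, then guesses which is
    the machine. The program 'passes' if the judge guesses wrong."""
    responders = [human_reply, program_reply]
    random.shuffle(responders)                # hide who is behind A and B
    channels = list(zip("AB", responders))
    for question in questions:
        for label, respond in channels:
            print(f"{label}: {respond(question)}")
    guess = input("Which channel is the machine, A or B? ").strip().upper()
    machine = next(label for label, fn in channels if fn is program_reply)
    return guess != machine
```

A single round like this proves very little, which is part of the point below: a stock answer or a well-placed typo can sway the judge as easily as genuine understanding can.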

But not everyone feels that the Turing Test demonstrates much; a few philosophers like John Searle have gone so far as to say that the Turing test proves nothing about consciousness, or, in effect, that a talking tree could never be seen as conscious (I don't remember the name of who argued that). I'm not in their line of thinking, but I do feel that some part of the self involves having goals and motivations, and while a chat program can say it has those things, it lacks the ability to act them out. This is not to say that something that passes the Turing Test isn't conscious, just that I'm not sure I would be ready to call it conscious yet.

Another problem is that the Turing test isn't looking for consciousness; it's looking for human consciousness. It is by definition looking for something that makes human errors and talks the way another person would. It's telling that Kurzweil says a computer will have to dumb itself down to pass the test, because it's not an ideal test of consciousness, and therein lies the rub: nothing is. If a program were clever enough to throw in a few spelling mistakes, vulgarities, and misconceptions, it could probably make someone think it was yours truly, but it would have used parlor tricks to do so.

The mirror test doesn't work for programs, and the Turing test only works to find something compatible with human understanding; it is also prone to programs gaming it by making the kinds of mistakes that people wouldn't think "computers" would make.

In fact, the whole topic might be a moot point. People subjectively deciding what is and isn't conscious seems like a huge can of worms. If this surely scientifically rigorous poll on the Just Labradors Forum, with all the appropriate measures to account for bias, is correct, then 78% of people believe that dogs are self-aware, 21% are undecided, and 0% think they are definitely not self-aware. People are biased toward thinking that biological things have consciousness and that machines don't, and the first conscious machines will be abused because of that.

When a machine passes the Turing Test and shows reasonable signs that it is conscious, it would be ethical to just treat it as such, seeing how poorly defined the criteria are and how no test seems ideal.



Thanks for reading,
-themoralskeptic