March 30, 2011

Mental Juggling

I was going to post about the plastic formation the size of Texas in the Pacific known as 'The Great Pacific Garbage Patch,' but that will have to wait until my next post. The delay happened because I've been reading a fair bit of Steven Pinker, and while looking through new comments I found something peculiar that strangely matched up with what I've been reading.

AhmTaGoji was writing about what it would mean for a computer to pass the Turing Test and said that,

"What I was trying to say was that no matter how convincing or uncanny of a performance a computer does during a Turing test is only evidence of the what the test attempts to prove: that a human can be fooled about conscious identity via a computer screen. The computer would have to go much further than to fool one human about his identity, he would have to, for a lack of better words, fool himself. Long after the Turing exercise, if the computer learns from his experiences and starts asking itself questions like 'What am I doing here? How did I get here? I'm human aren't I? Or what am I?', and begins to autonomously prepare himself in the anticipation of future experiences, then he isn't yet conscious."

That's a funny, yet perfect way of putting it: 'The computer would have to go much further than to fool one human about his identity, he would have to, for a lack of better words, fool himself.'

Besides the assumption that the computer is a man, I think that's a great understanding, because the more I read about consciousness, the more fooling ourselves seems to be what we do as well. That act of the mind fooling itself and giving itself a coherent reason for doing the things it does is a large part of what it is to be 'human', and would also be a large part of what it would take for a lot of people to consider a computer program sentient.

The Blank Slate, when talking about how the self is just another network of brain systems (pages 42-44), gets into how the brain tricks itself into having a reason for doing certain things. This trickery shows up most dramatically in people who have had their corpus callosum cut (the bundle of fibers that connects the two hemispheres of the brain). It can literally be described as cutting the self in two: each part acts independently while the other half juggles to make sense of what is going on.

The example is given where an experimenter shows the word 'walk' to the right hemisphere (by keeping it in the part of the visual field that only the right hemisphere can see), and the person will comply and begin to walk. Admittedly not that interesting yet, but a strange thing happens when the person is asked why he got up and started walking (language resides in the person's left hemisphere). The person doesn't say "I just got a feeling to," or "Since my surgery where my corpus callosum was cut, I do things without knowing exactly why."

Instead they say, in complete sincerity, that they were going to get a Coke. They manufacture a logical reason for what they were doing, even though it can be objectively shown that the real reason was different.

That's the trickery coming in, and Pinker concludes that, "The spooky part is that we have no reason to think that the baloney-generator in the patient's left hemisphere is behaving any differently from ours as we make sense of the inclinations emanating from the rest of our brains. The conscious mind - the self or soul - is a spin doctor, not the commander in chief."

While AhmTaGoji's point may be that for someone to consider a machine 'a person' the machine would have to do more than talk, and would have to claim adamantly for itself that it was a person, it also connects very well with the fool inside all of us.

Happy Spinning,
-the moral skeptic

March 17, 2011

A Showcase of Egoism

Thanks to those who commented on my last post. I'm taking a step back from the outside world, and I'm going to talk about a personal pet peeve: two types of statements that get under my skin. I'll try to convey what those statements are and why they bother me, and try to propose a solution. Now, to anticipate future criticism, I know these statements are often used unthinkingly, and I don't expect to run into them any less often. I'll probably have to settle for just being able to vent, and maybe it might make you think the next time you hear one muttered.

The first type of statement I've often seen talked about before, but I'm going to take it in a different direction than skeptics usually do.

1. The arrogance of cause and effect - The example I will use could be picked from the hundreds I've doubtlessly heard throughout my years, but one sticks out: it is the one I heard most recently, and it was the inspiration for this post.

A lady told me that everything happens for a reason, as is often the beginning or end to a wild claim being made, and proceeded to tell me that not having room in a vehicle to take someone on a ride happened because that person would later become sick. Now this seemingly everyday chance occurrence may seem pretty trivial, but to her it was proof of everything happening for a reason. (Now it may look like I'm being ungenerous and blowing her statement's intention out of proportion, which is true in a way, but she makes these types of statements numerous times a day as proof of life's plan. This was truly how the statement was meant to be interpreted.)

Now when most skeptics or atheists look at that kind of statement, they charge the person with making the error of assuming that the universe works according to a plan, which is a very good way of dealing with the question if you are talking to someone who wants to look at the comment logically. There is no evidence of universal planning, or if there is a plan it has been so insidiously created that it doesn't look like a plan at all: good people get hurt and die, people are born with all sorts of different ailments, and even our existence seems to come from a number of steps built on chance.

As someone I'm proud to share a name with, Stephen Jay Gould, would say, “History includes too much chaos, or extremely sensitive dependence on minute and unmeasurable differences in initial conditions, leading to massively divergent outcomes based on tiny and unknowable disparities in starting points. And history includes too much contingency, or shaping of present results by long chains of unpredictable antecedent states, rather than immediate determination by timeless laws of nature. Homo sapiens did not appear on the earth, just a geologic second ago, because evolutionary theory predicts such an outcome based on themes of progress and increasing neural complexity. Humans arose, rather, as a fortuitous and contingent outcome of thousands of linked events, any one of which could have occurred differently and sent history on an alternative pathway that would not have led to consciousness.”

Yet I don't think that type of comment, as intelligent as it is, would do a lick of good for anyone who was willing to make a comment about life having a plan. They have seen the plan in everything around them, so I'd argue with them on their own grounds and accept their view of a plan.

Even if life does have a plan and everything happens for a reason to fulfill that plan, how could anyone be so arrogant as to say, "This is what the divine plan is, and this is the absolute reason this particular thing happened!"? After all, God is said to work in mysterious ways. I don't think that anyone would stand up and claim that they indeed know the plan for life, and if they do, good for them. They weren't worth arguing with in the first place.

This approach does avoid the root of the problem, a lack of critical thinking, but it will at least start the person on the path of thinking about how hard it might be to determine the reason behind something happening.

2. The self-centered universe - While my first peeve surrounded the issue of someone having the arrogance to claim that they know what I can only describe as 'God's mind', the second is one where people clearly don't see the forest for the trees.

This happens when people say something to the effect of "Thank God for helping me win" or "Thank God they are alright." Now the second one may seem like a non-issue except, again, for the problem of knowing it was indeed God that saved them, but it has another huge issue when used in times of tragedy.

Recently, for anyone allergic to the news, there was a 9.0 magnitude earthquake that dropped parts of Japan's coastline two feet closer to sea level, with an expected death toll of more than 10,000. A couple of relatives of mine were actually visiting Japan at the time, which caused a fair bit of anxiety and also a comment that I still regret hearing. It was the comment stated above: "Thank God they are alright."

Now obviously this was good news to receive, but given the situation I think the sentiment could have been far better expressed with different words. To attribute those two individuals' safety to God, and then have nothing to say about God's designing a world that causes such destruction, or his/her/its inability to save the other 10,000 people, to whom we had no relation, is simply to show a complete disregard for the devastation of others, whether it was intended that way or not.

If God is to be thanked for helping save some people in that type of situation, God should also bear the blame for not helping the thousands of others; this isn't a dog that could only drag one family to shore, it's an omnipotent being that created the world, after all. To 'thank God' in that situation is to show a perplexing double standard that I can't even begin to understand, and an egoism that I would never want to condone. There seems to be an implicit understanding behind the statement that they were worthy of God's help while the thousands of others weren't.

Now I know that this is a common phrase, and I often fail to even catch myself from saying 'god dammit', but I do think that even so it shows both an egoism and an ignorance of the plight of others.

I'm in no way endorsing any kind of limit on free speech to solve any problem that slightly bothers someone, but I do think that if those types of statements were really thought about they would be made a heck of a lot less often.

Thanks for reading,
-the moral skeptic

March 2, 2011

Judging Computer Autonomy

Well, I can say that I was satisfied with the content of my last post, but I did let a couple of issues slide past that I would have liked to address. I really let Ray have his way due to the time constraint of having to go to work. Damn work requiring its workers to be there at certain times, and damn money needing to be gambled away.

Anyway, Ray Kurzweil has been criticized for making the statement that all technologies are progressing at an exponential rate rather than a linear rate, which leads people to overestimate progress in the short term and grossly underestimate it in the long term.

Yet there is room for criticism, because not all technologies are moving at an exponential rate. The progress in curing different cancers, in gas mileage for cars, and in the comfort of computer chairs hasn't matched the speed at which the human genome was decoded.

So the criticism is semi-warranted, because Kurzweil does overstate his case a little, but does it really matter? I don't think so. If only one technology continues to improve at the exponential rate it has historically been on, then the criticism is really just window dressing distracting from the implications that are fun to think about. That technology is, of course, the computer.
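The linear-versus-exponential point is easy to see with a little arithmetic. Here's a minimal sketch; the starting value, yearly gain, and two-year doubling period are illustrative assumptions of mine, not figures from Kurzweil:

```python
# Why linear extrapolation fails for exponential trends: the two
# forecasts agree in the short term and diverge wildly later on.

def linear_forecast(start, yearly_gain, years):
    """Progress if we assume the same absolute gain every year."""
    return start + yearly_gain * years

def exponential_forecast(start, doubling_years, years):
    """Progress if capability doubles every `doubling_years` years."""
    return start * 2 ** (years / doubling_years)

start = 1.0      # today's capability (arbitrary units)
gain = 0.5       # recent absolute gain per year (assumed)
doubling = 2.0   # assumed doubling period in years

for years in (2, 10, 30):
    lin = linear_forecast(start, gain, years)
    exp = exponential_forecast(start, doubling, years)
    print(f"{years:2d} years: linear {lin:8.1f}   exponential {exp:10.1f}")
```

At two years the two forecasts are identical (2.0 vs 2.0); at thirty years the linear guess sits at 16 while the exponential curve has hit 32,768. That gap is exactly the "overestimate short term, underestimate long term" effect.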

So long as the progress of computers is exponential (which it is), the implications will be vast enough to make Kurzweil's predictions for the most part correct. Looking at what 'Watson' did a couple of weeks ago, understanding human language and answering questions, it is impossible not to look into the future and think about the possibilities, some of which were mentioned in my last post.

As Yogi Berra is attributed with saying, 'It is hard to make predictions, especially about the future', but Kurzweil has a good track record and there is really no harm done in being wrong. The people who foresaw the flying car never broke into your house and peed on your rug.

Apart from the flying car, one prediction seems to stand out from the rest: when will humans build something that can be described as having a self? When will something be built that can be said to be conscious? Well, that's a tricky question to answer, for a number of different reasons.

The standard test for telling whether an animal has a concept of self is the mirror test, which involves putting an animal in front of a mirror with a dot on its body that it can only view in the mirror. The great apes, dolphins, orcas, elephants and a couple of species of birds have all passed the mirror test. To pass, the dot has to be noticed in the mirror and connected to the animal's own body; this shows an understanding that the someone in the mirror is 'me', and curiosity about why there is a dot on my back. As a control, another dot is placed on an area of the animal that is out of sight when it is looking in the mirror, to rule out coincidental reactions.

This test has been used to show that babies up to 18 months, dogs and cats don't react to the dot in any way and therefore lack a concept of self, something that can be noticed by anyone who has seen a very young or handicapped child play 'with the boy/girl in the mirror.'

Well, this test wouldn't work at all for computers, because they could be specifically made to pass the mirror test and respond to a dot put on them. It really breaks our standards to think about how to test for consciousness in a machine, which is why people look to the Turing Test as the mirror test for computers. The test involves two chat boxes, one run by another human and one by a computer program, with a person talking to both. The person then has to decide which one is the program and which is the human; when the two are indistinguishable, the computer has passed the test.
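The setup described above can be sketched as a toy program. Everything here is my own illustration: the canned replies and the `run_trial` helper are placeholders, not a real chatbot or an actual Turing Test harness:

```python
import random

# Toy Turing Test setup: a judge talks to two hidden parties over
# channels A and B (one human, one program, randomly assigned) and
# must guess which channel hides the program.

def human_reply(message):
    # Placeholder for a real human typing a response.
    return "Honestly, I'd have to think about that one."

def program_reply(message):
    # Placeholder chatbot; identical replies make the parties
    # indistinguishable, the condition for "passing" the test.
    return "Honestly, I'd have to think about that one."

def run_trial(judge_guess):
    """Randomly assign the two parties to channels A and B, collect one
    reply from each, and score the judge's guess (the channel they
    believe hides the program)."""
    parties = [("human", human_reply), ("program", program_reply)]
    random.shuffle(parties)
    channels = dict(zip("AB", parties))
    transcript = {ch: fn("What are you doing here?")
                  for ch, (_, fn) in channels.items()}
    judge_was_right = channels[judge_guess][0] == "program"
    return transcript, judge_was_right
```

Since both channels return the same reply here, the judge can do no better than a coin flip over many trials, which is the operational meaning of "indistinguishable."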

But not all people feel that the Turing Test demonstrates much; a few philosophers like John Searle have gone so far as to say that the Turing Test proves nothing about consciousness, or in effect that a talking tree could never be seen as conscious (I don't remember the name of who argued that). I'm not in their line of thinking, but I do feel that some part of the self involves having goals and motivations, and while a chat program can say it has those things, it lacks the ability to act them out. This is not to say that something that passes the Turing Test isn't conscious, just that I'm not sure I would be ready to call it conscious yet.

Another problem is that the Turing Test isn't looking for consciousness, it's looking for human consciousness; it's by definition looking for something that makes human errors and talks like another person would. It's telling when Kurzweil says that the computer will have to dumb itself down to pass the test, because it's not an ideal test of consciousness, and therein lies the rub: nothing is. If the program were elegant enough to throw in a few spelling mistakes, vulgarities and misconceptions, it could probably make someone think it was yours truly, but it would have used parlor tricks to do so.

The mirror test doesn't work for programs, and the Turing Test only works to find something compatible with human understandings, and could be prone to programs taking advantage by making mistakes that people wouldn't think "computers" would make.

In fact, the whole topic might be a moot point. People subjectively deciding what is and isn't conscious seems like a huge can of worms. If this surely scientifically conducted poll on the Just Labradors Forum, with the appropriate measures to account for bias, is correct, then 78% of people believe that dogs are self-aware, 21% are undecided and 0% think that they are definitely not self-aware. People are biased into thinking that biological things have consciousness and that machines don't, and the first conscious machines will be abused because of that fact.

When a machine passes the Turing Test and shows reasonable signs that it is conscious, it would be ethical to just treat it as such, seeing how the criteria are so poorly defined and no test seems ideal.

Thanks for reading,
-the moral skeptic