With the introduction of technology, machines have become better and better at tasks we once felt were uniquely human. Assembly lines have replaced jobs previously held by day laborers, the electronic calculator has rendered obsolete the occupation that once shared its name, and even body parts we once believed could not be replicated now have prosthetic doppelgangers. It is in this pattern of technology emulating human behavior that the computer science community hurtles toward its next challenge: artificial intelligence. The idea of a sentient but artificial being is a concept that not only science has embraced; it is also a controversial one whose social, political, and cultural ramifications have been explored through many forms of popular media. Many technological prophets feel this technology is imminent and obtainable, yet just as many counter that it is impossible. While this concept of artificial intelligence may be possible, it is much more likely that we are far from it, if we can obtain it at all, as the technical barriers that computers face are far too great. And even assuming that this technology is somehow achieved, predictions of its influence on our world range from great social enhancement to total species annihilation at the hands of our own creation. The most feasible outcome lies at neither extreme but on middle ground, where this technology helps us yet is still restrained from reaching its full potential.
To truly understand the complexities of artificial intelligence, one must first confront the question: what exactly constitutes AI? Because intelligence can be subjective, a standardized test has been adopted in computer science to set the bar at which we as humans decide whether a mechanical being is intelligent. This analysis is aptly named the Turing Test, after Alan Turing, who first proposed the experiment in 1950. In Turing's setup, a series of computers and human decoys hold text-based terminal conversations with a number of judges, who must then decide whether the entity they are conversing with is human. "Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result, one would 'be able to speak of machines thinking without expecting to be contradicted'" (Christian). The consensus is that if a being can fool someone into believing it is human for at least five minutes, then it must be intelligent.
But if a machine does indeed pass this test, which none has yet done, does that immediately certify it as intelligent? What do we as a populace define intelligence to be? According to the Oxford English Dictionary, intelligence is "the faculty of understanding; intellect" (Intelligence, N.). Just because a computer can imitate an intelligent human conversation, does it really understand what it is saying? If the user inputs, "You're funny," does the machine grasp everything that statement implies? The resulting output takes no account of the compliment or sarcasm the statement may carry. Even a response of "haha" is nothing more than a predefined output inserted by the programmer. Even programs that are supposed to learn, such as CleverBot, simply process input in a rigidly defined way. This kind of pattern matching is the basis for all artificial intelligence, even systems that follow a logic system such as chess bots. "Yes, a computer can play chess, but no, a computer can't tell the pawn from the king. That's the hard problem. You give him some Marcel Duchamp chess set where everybody else can tell what the pieces are, he can't tell" (The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2).
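To make this concrete, here is a minimal sketch in Python of the kind of rigid pattern matching described above. The rules and canned responses are invented for illustration, and real programs like CleverBot use vastly larger rule and statistics bases, but the principle is the same: input is matched against stored patterns, never understood.

    # A minimal sketch of rigid pattern matching. The rules and canned
    # responses here are invented for illustration only.
    RULES = {
        "you're funny": "haha",  # a predefined output, not amusement
        "hello": "Hi there! How are you?",
        "how are you": "I'm fine, thank you.",
    }

    def respond(user_input):
        """Return a canned reply for any phrase the rules happen to cover."""
        key = user_input.lower().strip(" .!?")
        # The program has no notion of compliment or sarcasm; it either
        # finds an exact pattern or falls back to a stock deflection.
        return RULES.get(key, "Tell me more about that.")

    print(respond("You're funny"))                 # -> haha
    print(respond("Is that a Duchamp chess set?")) # -> Tell me more about that.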
While these issues and perceptions of intelligence may seem like recent phenomena, AI has roots in scientific pursuits reaching back to the 4th century B.C., when "Aristotle invented syllogistic logic, the first formal deductive reasoning system" (AITopics / BriefHistory). The idea moved on to the talking head supposedly created by Bacon in the 13th century, and then to the 17th-century proposal by Descartes that "the bodies of animals are nothing more than complex machines" (AITopics / BriefHistory). Science has long been fascinated with the concept of AI, although the fascination has grown more pronounced in recent years. This may be surprising to some, as "for a long time, people argued well it gets you into philosophy, it gets you into religion or metaphysics: things that science by definition cannot deal very well with. And science wants to deal with concrete things, […] and consciousness in general does not seem to have that character. But since it's part of the natural world, […] therefore if […] we want to have a complete and perfect description of the world we cannot skirt around the issue of consciousness" (The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2).
Yet even with this focus and the rapid technological development of recent years, we are still far from AI. The reason we cannot get beyond ever more efficient and complex pattern-matching algorithms lies in the very way computers operate. The largest and most glaring barrier to AI, and the reason it is most likely not even possible, is the unresolved P versus NP problem, the most important open question in computer science. Computers process information through algorithms rather than in the unique way our brains do, and it is in this difference that issues arise. "The P versus NP problem is to determine whether every language accepted by some nondeterministic algorithm in polynomial time is also accepted by some (deterministic) algorithm in polynomial time" (Cook). Informally, a problem is in P if it can be solved quickly, while it is in NP if a proposed solution can be verified quickly.
This concept may seem abstract and confusing at first glance, but it can be simplified with the following example. Suppose we want an algorithm that, given a set of numbers, finds a non-empty subset whose sum equals zero. This problem falls into the classification of NP. While we can easily check whether a given subset sums to zero ({1, 2, -1, -2} -> 1 + 2 - 1 - 2 = 0), we cannot examine every possible subset in a reasonable amount of time, because the number of subsets doubles with each element added and quickly becomes astronomical. No known deterministic algorithm solves the problem in polynomial time, but a proposed solution can be verified in polynomial time, which places it in NP. By contrast, the Euclidean algorithm for finding the greatest common divisor of two integers, one of the world's oldest algorithms, runs in polynomial time by the structure of how it determines its solutions and so falls within the confines of P.
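The contrast can be shown in a few lines of Python. This is only an illustrative sketch of the two examples above: checking a proposed zero-sum subset takes a single pass, the obvious search for one must wade through every subset, and Euclid's algorithm finishes quickly no matter the input.

    from itertools import combinations

    def verify(subset):
        # Checking a proposed solution is easy: one pass over the numbers.
        return len(subset) > 0 and sum(subset) == 0

    def find_zero_subset(numbers):
        # Brute-force search: up to 2^n subsets to try, which is why no one
        # knows how to do this quickly once the set gets large.
        for size in range(1, len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == 0:
                    return subset
        return None

    def gcd(a, b):
        # Euclid's algorithm runs in polynomial time, so the problem it
        # solves sits comfortably inside P.
        while b:
            a, b = b, a % b
        return a

    print(verify((1, 2, -1, -2)))           # True, checked in an instant
    print(find_zero_subset([3, 5, -8, 4]))  # (3, 5, -8), found by exhaustion
    print(gcd(48, 18))                      # 6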
P vs. NP matters here because of the nature of the algorithms that an artificial intelligence would require. The algorithms that would replicate our thoughts and cognitive processes would fall within the realm of NP, so in order for them to function at all we would have to prove that P = NP. "P = NP means that for every problem that has an efficiently verifiable solution, we can find that solution efficiently as well" (Fortnow). A solution to these problems would have to be found, and nobody has yet been able to find one. Our limitations in achieving this technology are not merely the physical limitations of circuits or insufficiently fast processors. Those are problems which can be expected to yield under Moore's law, which "states that the number of transistors on a chip will double about every two years" (Moore's Law: Made Real by Intel Innovations). The P versus NP issue concerns the very nature of computers themselves, and it will need to be resolved before any real progress can be made.
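For a sense of the growth Moore's law describes, here is a back-of-the-envelope sketch in Python. The 1971 starting figure (2,300 transistors on Intel's 4004) is well documented; the projection itself is purely illustrative, not a claim about any particular chip.

    def transistors(start_count, start_year, year):
        """Double the count once every two years, per Moore's law."""
        return start_count * 2 ** ((year - start_year) / 2)

    # 2,300 transistors in 1971 projects to a few billion by 2011,
    # which is roughly where flagship processors actually landed.
    print(f"{transistors(2300, 1971, 2011):,.0f}")  # about 2.4 billion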
Yet many remain confident that artificial intelligence is impending. Futurist Ray Kurzweil, one of the most prominent theorists of technological development, believes it is possible. Not only does he consider it feasible, he reads our current rate of technological growth as an indication that it will happen very soon. "In 2029, nanobots will circulate through the bloodstream of the human body to diagnose illnesses" (Kurzweil 307). That particular technology, however, does not require the complexities of AI. Kurzweil also asserts that "by 2029 […] we will have completed the reverse engineering of the human brain" (Ray Kurzweil Explains the Coming Singularity). He supports this with the fact that we have already engineered models of several sections of the brain, including the cerebral, auditory, and visual cortices. He claims that this will provide us with all the algorithms needed to simulate human intelligence. Whether this will conflict with the P vs. NP thesis remains to be seen.
It is perhaps this perceived looming of AI that accounts for its influence on popular media and culture, with their perceptions and predictions of a future where AI is an everyday technology. It seems that instead of embracing this possible technology and striving for it, we have taken it upon ourselves to warn of the dangers of an AI if one were ever to surface. Numerous pieces of media depict a dystopian future in which the world of man has been completely destroyed by the machines. This is seen in the acclaimed film The Matrix, where the character Morpheus explains that the world humanity now resides in is simply a computer program controlled by the AI, who have destroyed human civilization and forced us into enslavement. The fear not only of destruction but of the ramifications of creating life and "playing God" reaches as far back as the early 1800s in the iconic book Frankenstein. Here, the monster goes on a murderous rampage against his creator. He notes, "The nearer I approached to your habitation, the more deeply did I feel the spirit of revenge enkindled in my heart" (Shelley 124).
Other futurists, by contrast, warmly embrace this coming of AI. These individuals note the good such machines could do and how they could make all of human life easier by continuing the pattern of technology replacing humans in various tasks. They cite the notion of the singularity: "Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more effective at building still smarter minds" (What Is the Singularity?). This process would occur almost instantaneously once a computer reached that initial threshold, creating a machine far exceeding the limits of the human mind. It could achieve things we never physically could, and could therefore make improvements we could neither conceive of nor carry out.
However, our preconceived notions about AI will likely leave us in a future somewhere between these utopias and dystopias. Because of the influence of mass media, we as a people are understandably cautious about the power an AI could have. If such an intelligence ever came close to being physically achieved, it is more than likely that limits and safety mechanisms would be hardwired in so that we remained in control of the machine's influence. This should prevent the disasters seen in movies such as The Matrix. However, these limits would also hinder the potential of AI, leaving it unable to grow into the ability dreamed of by the optimistic futurists. By placing any restraints on its intelligence, we render the idea of the singularity impossible. While William Gibson's novel Neuromancer does portray a dystopian technological future, its premise of AIs trapped within confines set up for them by humanity may well prove accurate if the notion of AI ever comes to fruition.
As we progress toward a more and more technological society, the question of AI will become a bigger and bigger issue. While artificial intelligence is currently not possible due to technological limitations and P vs. NP, there may be ways around these barriers and solutions yet to be discovered. Though this is unlikely, and a truly sentient artificial being will more than likely never come into existence in our world, it is unknown how we as a society would handle one if it did. The issue is that we have never before dealt with another intelligent form of life. How would we treat it? If the historical encounters between long-separated cultures and societies are any guide, one of two outcomes will emerge. We will either live in harmonious unity with our created life, or, like the European settlers who chose instead to destroy the natives, one form of life may have to make room for the other.
Works Cited
"AITopics / BriefHistory." Association for the Advancement of Artificial Intelligence. Web. 16 Mar. 2011. <http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/BriefHistory>.
Christian, Brian. "Mind vs. Machine." The Atlantic (2011). Web. 27 Mar. 2011. <http://www.theatlantic.com/magazine/archive/1969/12/mind-vs-machine/8386/>.
Cook, Stephen. "The P versus NP Problem." The Clay Mathematics Institute. Web. 16 Mar. 2011. <http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf>.
Fortnow, Lance. "The Status of the P versus NP Problem." The University of Chicago. Web. 16 Nov. 2010. <http://people.cs.uchicago.edu/~fortnow/papers/pnp-cacm.pdf>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=Q2JD5xg6weE>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=QN1l5e1yamU>.
"Intelligence, N." Oxford English Dictionary. Oxford University Press, Nov. 2010. Web. 28 Mar. 2011. <http://tinyurl.com/6dbxo42>.
Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking, 1999. Print.
"Moore's Law: Made Real by Intel Innovations." Intel. Web. 28 Mar. 2011. <http://www.intel.com/technology/mooreslaw/>.
"Ray Kurzweil Explains the Coming Singularity." YouTube. 28 Apr. 2009. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=1uIzS1uCOcE>.
Shelley, Mary W., and Johanna M. Smith. Frankenstein. Boston: Bedford/St. Martin's, 2000. Print.
"What Is the Singularity?" Singularity Institute for Artificial Intelligence. Web. 28 Mar. 2011. <http://singinst.org/overview/whatisthesingularity>.
I must admit, I don't think I understand the difference between P and NP. I'm familiar with the Turing Test, but not with these concepts. However, I don't think these concepts are going to signify the birth of AI, because they are still human concepts.
I can most easily explain my point by discussing how we perceive animals, which you mention are thought of as complex machines. We've devalued the agency of the animal, mistakenly, I think, because we've subjugated it to our tests, the tests humans created. So, according to our standards, animals can't think independently. Because we value our thought, we think this makes us better than animals.
Franz Kafka gives an example of this thought processing. In his example, a monkey is put in a cage and observed when his food, bananas, is placed out of reach. The researchers measure the animal's intelligence by whether he can figure out how to get to the bananas. But the monkey, in Kafka's mind, could be thinking about far more complex issues: Why have these men locked me up? Why have they placed the bananas out of reach? Instead of measuring these more complex thoughts, the researchers measure only the most basic form of intelligence, and demean the monkey with their tests. Eventually, out of physical need, the monkey must comply.
At this point I'm sure you're thinking: why is he talking about monkeys? I mention Kafka in the same way J.M. Coetzee mentions the example, to allude to a larger point. We are measuring intelligence and consciousness by our own standards, in the same way we are measuring life in the terms we have prescribed. I don't think A.I. will look anything like we imagine it to. It won't fit into a neat test, or be qualified by algorithms. It will be its own entity, and we might not consider it conscious, or alive, until it has developed into something unmanageable, as in the sci-fi stories.
Hopefully that entity doesn't decide to harvest us for power. Or perhaps it already has...