With the introduction of technology, machines have become better and better at tasks we once felt were uniquely human. Assembly lines have replaced jobs previously held by day laborers, the electronic calculator has rendered obsolete the occupation that once shared its name, and even body parts we once thought impossible to replicate now have prosthetic doppelgangers. It is in this pattern of technology emulating human behavior that the computer science community hurtles toward its next challenge: artificial intelligence. The idea of a sentient but artificial being is one that science has embraced, yet it remains controversial, with social, political, and cultural ramifications explored across many forms of popular media. Many technological prophets insist this technology is imminent and obtainable, while just as many counter that it is impossible. While this concept of artificial intelligence is perhaps possible, it is much more likely that we are far from it, if we can obtain it at all, as the technological barriers computers face are far too great. Moreover, assuming this technology is somehow achieved, predictions of its influence on our world range from great social enhancement to total species annihilation at the hands of our own creation. The most feasible outcome likely lies at neither extreme, but on a middle ground where this technology helps us yet is kept from reaching its full potential.
[Image: Statue of Alan Turing, the man who devised the Turing Test]
To truly understand the complexities of artificial intelligence, one must first confront the question: what exactly constitutes AI? Because intelligence can be subjective, the field of computer science has adopted a standardized test to set the bar at which we determine whether a mechanical being is intelligent. This analysis is aptly named the Turing Test after the man who first proposed the experiment in 1950. In Turing's setup, a series of computers and human decoys hold text-based terminal conversations with a number of judges, who must then decide whether the entity they are conversing with is human. “Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result, one would “be able to speak of machines thinking without expecting to be contradicted”” (Christian). The consensus is that if a machine can pass as human for at least five minutes, then it must be intelligent.
But if a machine does indeed pass this test, which none has yet done, does that immediately certify it as intelligent? How do we as a populace define intelligence? According to the Oxford English Dictionary, intelligence is “the faculty of understanding; intellect” (Intelligence, N.). Just because a computer can imitate an intelligent human conversation, does it really understand what it is saying? If the user inputs, “You’re funny,” does the machine grasp everything that statement implies? The resulting output takes no account of the compliment or sarcasm the statement may carry. Even a response of “haha” is nothing more than a predefined output inserted by the programmer. Even programs that are supposed to learn, such as Cleverbot, simply process input in a rigidly defined way. This kind of pattern matching is the basis for all current artificial intelligence, even systems that follow a logic system such as chess bots. “Yes, a computer can play chess, but no, a computer can’t tell the pawn from the king. That’s the hard problem. You give him some Marcel Duchamp chess set where everybody else can tell what the pieces are, he can’t tell” (The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2). A computer cannot reason, in this instance, that even though the pieces look different, the way it should interact with them remains the same. It needs to be told this explicitly, while a human can reason that although the pawns may look different, they still have the same function.
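The rigidity described above can be made concrete with a toy sketch. The patterns and canned replies below are invented for illustration and are not Cleverbot's actual implementation; they simply show how a chatbot's "haha" can be nothing more than a predefined output keyed to a pattern:

```python
import re

# A toy chatbot: every reply is a canned output keyed to a pattern.
# The program has no grasp of compliments, sarcasm, or context.
RULES = [
    (re.compile(r"you('|a)?re funny", re.I), "haha"),
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How are you?"),
    (re.compile(r"how are you", re.I), "I'm fine, thanks."),
]

def reply(user_input):
    for pattern, canned_response in RULES:
        if pattern.search(user_input):
            return canned_response
    return "Tell me more."  # default when nothing matches

print(reply("You're funny"))     # haha
print(reply("What is a pawn?"))  # Tell me more.
```

However clever the rule set grows, the machine is still only matching input shapes to programmer-supplied outputs, which is exactly the gap between imitation and understanding the essay describes.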
[Image: An example of a Marcel Duchamp chess set. Note the abstractly shaped pieces.]
While these issues and perceptions of intelligence may seem like recent phenomena, AI has roots in scientific pursuits reaching back to the 4th Century B.C., when “Aristotle invent[ed] syllogistic logic, the first formal deductive reasoning system” (AITopics / BriefHistory). The lineage runs through the talking heads supposedly created by Bacon in the 13th Century to Descartes's 17th-Century proposal that “the bodies of animals are nothing more than complex machines” (AITopics / BriefHistory). Science has long been fascinated with the concept of AI, though the fascination has grown more pronounced in recent years. This may be surprising to some, as,
For a long time, people argued well it gets you into philosophy, it gets you into religion or metaphysics: things that science by definition cannot deal very well with. And science wants to deal with concrete things, […] and consciousness in general does not seem to have that character. But since it’s part of the natural world, […] therefore if […] we want to have a complete and perfect description of the world we cannot skirt around the issue of consciousness (The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2).

While the workings of the human brain seem illogical at times, scientists still believe there must be some logical, structural explanation for how our brains work. They seek to prove that even our irrationality has its basis in rational processes, even if those occur in our unconscious.
Yet even with this focus and the rapid technological development of recent years, we are still far from AI. The reason we cannot get beyond ever more efficient and complex pattern-matching algorithms lies in the very way computers operate. The largest and most glaring barrier to AI, and the reason it is most likely not even possible, is the unresolved P vs. NP problem, the most important open problem in computer science. Computers process information through algorithms rather than in the way our brains uniquely process information, and it is in this difference that the trouble lies. “The P versus NP problem is to determine whether every language accepted by some nondeterministic algorithm in polynomial time is also accepted by some (deterministic) algorithm in polynomial time” (Cook). Informally, a problem is in P if it is quickly solvable, while it is in NP if a proposed solution is quickly verifiable.
This concept may seem abstract and confusing at first glance, but a simple example clarifies it. Suppose we want an algorithm that, given a set of numbers, finds a nonempty subset whose sum equals zero. This problem falls into NP: while we can easily verify whether a given subset sums to zero ({1, 2, -1, -2} → 1 + 2 - 1 - 2 = 0), no known algorithm finds such a subset without, in the worst case, trying an exponential number of possible combinations. As far as we know, the search cannot be done in polynomial time except by a nondeterministic machine, which places the problem in NP. By contrast, the Euclidean algorithm for finding the greatest common divisor of two integers, one of the world’s oldest algorithms, runs in polynomial time by the very structure of how it reaches its solution, and so falls within the confines of P.
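The asymmetry between verifying and finding can be seen directly in code. This is a minimal sketch of the zero-sum-subset example above, alongside Euclid's algorithm for contrast; the function names are my own:

```python
from itertools import chain, combinations

def verify(candidate):
    """Polynomial-time check: does this nonempty subset sum to zero?"""
    return len(candidate) > 0 and sum(candidate) == 0

def find_zero_subset(numbers):
    """Brute-force search over all 2^n - 1 nonempty subsets (NP-style search)."""
    subsets = chain.from_iterable(
        combinations(numbers, k) for k in range(1, len(numbers) + 1))
    for subset in subsets:
        if verify(subset):
            return subset
    return None

def gcd(a, b):
    """Euclid's algorithm: runs in polynomial time, firmly in P."""
    while b:
        a, b = b, a % b
    return a

print(verify((1, 2, -1, -2)))           # True: checking a candidate is instant
print(find_zero_subset([3, 1, -4, 7]))  # (3, 1, -4): found only by exhaustive search
print(gcd(48, 18))                      # 6
```

Verifying a proposed subset takes one pass over it, while the search loop grows exponentially with the size of the input set, which is precisely the gap the P vs. NP question asks about.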
[Image: An XKCD comic depicting the Traveling Salesman Problem, one of many problems that fall under the classification of NP.]
P vs. NP is important because of the nature of the algorithms that would be necessary for artificial intelligence to occur. The algorithms that would replicate our thoughts and cognitive thinking would fall within the realm of NP, so for them to function at all we would have to prove that P = NP. “P = NP means that for every problem that has an efficiently verifiable solution, we can find that solution efficiently as well” (Fortnow). No one has yet been able to prove this. Our limitations in achieving this technology are not merely the physical limits of circuits or the lack of fast enough processors; those are problems that Moore's law, which “states that the number of transistors on a chip will double about every two years” (Moore's Law: Made Real by Intel Innovations), suggests will erode on their own. This issue concerns the very nature of computation itself, and it will need to be resolved before any real progress can be made.
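The doubling in Moore's law is simple exponential arithmetic, which a few lines make concrete. The 1971 Intel 4004 baseline of roughly 2,300 transistors is an assumption used purely for illustration here, not a figure from the Intel page cited above:

```python
# Moore's law as quoted above: transistor counts double about every
# two years, i.e. count(t) = count_0 * 2 ** (t / 2).

def transistors(start_count, years):
    """Projected transistor count after `years`, doubling every 2 years."""
    return start_count * 2 ** (years / 2)

# Assumed baseline: Intel 4004 (1971), ~2,300 transistors.
print(f"{transistors(2300, 40):,.0f}")  # 40 years later (2011): ~2.4 billion
```

Forty years of doubling turns thousands of transistors into billions, which is why raw hardware speed is the easy part; the P vs. NP barrier does not shrink no matter how many transistors we add.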
To understand why this is vital to artificial intelligence, consider how a computer and a human would each solve a propositional logic problem. For those unfamiliar, propositional logic is “a kind of logic in which the fundamental components are whole statements or propositions” (Hurley 680). A problem provides a set of axioms and a conclusion the solver must reach; to prove the given statement, the solver may use a set of rules of implication and rules of replacement, methods for rewriting a fact or deducing a new one. When a computer is presented with such a problem, it attempts to solve it by a process known as “forward chaining”: it simply runs through every possible application of these axioms and inference rules, trying combination after combination until it finds a solution. While some work has made this process more efficient, in the worst case it remains an exponential search. A human presented with the same problem does nothing of the sort. Students in logic courses solve problems like this all the time, and (relatively) quickly, because we have some notion, some intuition, of how to approach them. Our “algorithm,” in effect, runs in polynomial time. The challenge is getting a computer to do the same, and we have yet to program intuition into a silicon chip.
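The blind rule application described above can be sketched in a few lines. This toy version handles only modus ponens over simple if-then rules (real provers must also handle rules of replacement and arbitrary formulas, where the search space explodes combinatorially); the rule encoding is my own:

```python
# Toy forward chaining: repeatedly apply modus ponens until the goal
# appears among the known facts or nothing new can be derived.
# Each rule is a (premises, conclusion) pair, e.g. ({"P"}, "Q") for P -> Q.

def forward_chain(facts, rules, goal):
    facts = set(facts)
    changed = True
    while changed:          # keep sweeping until a fixed point is reached
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # derived a new fact
                changed = True
    return goal in facts

# Axioms: P, P -> Q, Q -> R.  Conclusion to prove: R.
rules = [({"P"}, "Q"), ({"Q"}, "R")]
print(forward_chain({"P"}, rules, "R"))  # True
```

Even this mechanical sweep has no "intuition" about which rule to try first; it just grinds through the rule list until something fires, which is the contrast with human problem solving the paragraph draws.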
[Image: Futurist Ray Kurzweil]
Yet many remain confident that artificial intelligence is impending. Futurist Ray Kurzweil, one of the most prominent theorists of technological development, believes it is possible. Not only does he find it feasible; he reads our current rate of technological growth as an indication that it will happen very soon. “In 2029, nanobots will circulate through the bloodstream of the human body to diagnose illnesses” (Kurzweil 307). However, that is technology that does not require the complexities of AI. Kurzweil also asserts that “by 2029 […] we will have completed the reverse engineering of the human brain” (Ray Kurzweil Explains the Coming Singularity). He supports this with the fact that we have already modeled several regions of the brain, including the cerebral, auditory, and visual cortices, and claims this will provide all the algorithms needed to simulate human intelligence. Whether this conflicts with the P vs. NP barrier remains to be seen, as just because we understand the structure of the brain does not necessarily mean we can use that knowledge to develop the necessary algorithms.
It is perhaps this perceived looming of AI that explains its influence on popular media and culture, with their perceptions and predictions of a future in which AI is an everyday technology. Instead of embracing this possible technology and striving for it, we seem to have taken it upon ourselves to warn of the dangers of an AI if one were ever to surface. Numerous pieces of media depict a dystopian future or a world of man completely destroyed by the machines. This is seen in the acclaimed film The Matrix, in which the character Morpheus explains that the world humanity now resides in is simply a computer program controlled by the AI, who have destroyed human civilization and forced us into enslavement. The fear not only of destruction but of the ramifications of creating life, the idea of “playing God,” reaches as far back as the early 1800s in the iconic book Frankenstein. There, the monster goes on a murderous rampage against his creator, noting, “The nearer I approached to your habitation, the more deeply did I feel the spirit of revenge enkindled in my heart” (Shelley 124). Modern culture prefers to give such advancements a connotation of destruction and demise, as some speculate is in keeping with human nature: people are often afraid of what they do not understand, whether other cultures, the supernatural, or even the dark. Since we still have no real idea how this technology will develop, it may be only natural that we attach the same feeling to AI.
[Image: Scene from the movie The Matrix]
There are still other futurists who warmly embrace this coming of AI, noting the good these machines could do and how they could make all of human life easier by continuing the pattern of technology replacing humans in various tasks. They cite the notion of the singularity: “Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more effective at building still smarter minds” (What Is the Singularity?). This process would unfold almost instantaneously once a computer crossed the initial threshold, creating a machine that far exceeds the limits of the human mind. It could achieve things we never physically could, and therefore make improvements we could neither consider nor achieve.
However, our preconceived notions about AI would likely leave us in a future somewhere between these utopias and dystopias. Because of the influence of mass media, we as a people are understandably cautious about the power an AI could wield. If such an intelligence were ever close to being achieved, limits and safety mechanisms would more than likely be hardwired in so that we remain in control of the machine's influence. This should prevent the disasters seen in films such as The Matrix. But these limits would also hinder the potential of AI, preventing it from growing into the ability dreamed of by the optimistic futurists; with any restraint on its intelligence, the singularity becomes impossible. While William Gibson's novel Neuromancer does portray a dystopian technological future, its premise of AIs trapped within confines set up for them by humanity may prove accurate if the notion of AI ever comes to fruition.
As we progress toward an ever more technological society, the question of AI will become an ever larger issue. While artificial intelligence is currently not possible due to technological limitations and P vs. NP, there may be ways around them and unknown solutions yet to be discovered. Though a truly sentient artificial being will more than likely never come into existence in our world, it is unknown how we as a society would handle one if it did. We have never dealt with a form of life other than our own. How would we treat it? If history's encounters between ancient cultures and societies are any guide, one of two outcomes will emerge: we will either live in harmonious unity with our created life, or, like the European settlers who chose instead to destroy the natives, one form of life may have to make room for the other.
Works Cited
"AITopics / BriefHistory." Association for the Advancement of Artificial Intelligence. Web. 16 Mar. 2011. <http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/BriefHistory>.
Christian, Brian. "Mind vs. Machine." The Atlantic (2011). Web. 27 Mar. 2011. <http://www.theatlantic.com/magazine/archive/1969/12/mind-vs-machine/8386/>.
Cook, Stephen. "The P versus NP Problem." The Clay Mathematics Institute. Web. 16 Mar. 2011. <http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf>.
Fortnow, Lance. "The Status of the P versus NP Problem." The University of Chicago. Web. 16 Nov. 2010. <http://people.cs.uchicago.edu/~fortnow/papers/pnp-cacm.pdf>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=Q2JD5xg6weE>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=QN1l5e1yamU>.
Hurley, Patrick J. A Concise Introduction to Logic. 10th ed. Belmont: Thomas Wadsworth, 2008. Print.
"Intelligence, N." Oxford English Dictionary. Oxford University Press, Nov. 2010. Web. 28 Mar. 2011. <http://tinyurl.com/6dbxo42>.
Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking, 1999. Print.
"Moore's Law: Made Real by Intel Innovations." Intel. Web. 28 Mar. 2011. <http://www.intel.com/technology/mooreslaw/>.
"Ray Kurzweil Explains the Coming Singularity." YouTube. 28 Apr. 2009. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=1uIzS1uCOcE>.
Shelley, Mary W., and Johanna M. Smith. Frankenstein. Boston: Bedford/St. Martin's, 2000. Print.
"What Is the Singularity?" Singularity Institute for Artificial Intelligence. Web. 28 Mar. 2011. <http://singinst.org/overview/whatisthesingularity>.