
Tuesday, April 19, 2011

Experience Summation and Course Evaluation

Sitting at the table area outside of the library, I have a good view of a large part of campus.  The lake stretches out before me, the light reflecting off the surface of the water a reminder that it was a good idea to wear the sunglasses perched upon my nose.  The commons stretches on to my left, and I can make out the dining hall and perhaps a hint of Gottwald.  The geese which have taken it upon themselves to annex the closer side of the lake occasionally anger a passing motorist who has to yield to their slow waddle across the tarmac.  One could wonder how a virtual world could recreate this.
And yet, when we look at game technology, we realize that this world isn’t far from being perfectly replicated in stunning 1080i.  Games such as the upcoming Battlefield 3 are already visually breathtaking.  True, virtual images of mountainous terrain will never be as stunning as the real thing.  But we will get very, very close.  Even the academic brick architectural theme of this campus matches a distinct visual style that certain games are known for.  While virtual worlds may not actively engage all five senses, the ones they do choose to stimulate are strikingly realistic.
And yet, there is a key difference between these two environments that I noticed:  the behavior of the residents of this campus is nothing like that of the residents one may find in WoW.  Looking at those around me, some were alone and many traveled in groups, yet the social boundaries that people had regarding their interactions with each other were set.  People didn’t suddenly join up and create new groups ad hoc, as is the norm in virtual worlds.  Rather, we have substituted the long-term relationships and interactions of campus life for the fleeting ones of cyberspace.  Plus, nobody ran around the campus insulting each other.  The trolls depicted in books such as Castronova’s Exodus to the Virtual World were seemingly absent, as people have to be accountable for their actions in this campus’s social culture.  Actions seen by the lake today would be remembered by everyone tomorrow.
The class itself covered a myriad of topics and made me think about technology and cyberspace in a completely new light.  I had never thought about the social and political ramifications of technological developments.  While the course itself did seem saturated with this view of technology, I was disappointed by the lack of evaluation of the actual technology that we covered.  This may be the programmer in me talking, but I really wanted to explore which technological developments led to certain changes, and which specific advancements made the world the way it is today.  That is why I was actually very excited when I heard that one of the Vice-Presidents of EA might address the class at some point, as the view of someone directly influencing the technology and the industry that we were talking about would have been riveting.  Overall, though, this course made me think of the terminal that I’m sitting behind now to type this blog in a completely new way, and for that I am glad.

Tuesday, April 12, 2011

A Massive Megabyte Migration

In his book Exodus to the Virtual World, Edward Castronova depicts a future where more and more individuals spend time within a virtual world.  Not only will the number of people increase, but the amount of time that they spend within these realities will drastically increase as well.  He notes that as the number of individuals increases, this is no longer a phenomenon that affects a small percentage of the population, but rather all of us.  Castronova argues that the mental stimulation and “fun” within these worlds make them much more appealing than the world around us may sometimes seem.
Now, for some of us, this may seem like an odd statement.  How can anyone survive spending all of their time sitting at a console?  Castronova notes that “it does not take much to support a human body at a level sufficient to allow the mind to live synthetically.  A room, a bed, a computer, Internet, some food, a toilet” (13).  The hallucination that the game creates is becoming so good that our minds are willing to spend hours and hours there, apart from the world that we actually reside in.
Now, many still can’t imagine living cramped in a small room all day, looking endlessly into a flickering screen.  They feel that their physical needs for exercise and exertion cannot be satisfied by these kinds of experiences.  Castronova counters this idea by noting that “you have to exercise strenuously in some video games, and […] they are radically transforming daily life” (30-31), and that “interface devices that rely on gross motor skills have already been released.  […] Game attachments to treadmills and exercise bikes are a natural adaptation” (54).
Now, while his evidence supports the claim that a point will come where the simulation becomes so good that most will wish to spend their time there, I still feel that there is a broad section of our population that will not partake in these games.  Some people cannot get over the need to play actual sports, enjoy the real outdoors, or have face-to-face companionship.  They may deem these simulations noteworthy alternatives for some, but the simulations will never suffice for them.  I feel this is how many today feel about this issue.  Only time will tell whether the next generation possesses a similar attitude, or whether they will abandon the world of carbon for that of silicon.

Tuesday, April 5, 2011

Project 3 - Exploration Into AI

With the introduction of technology, machines have become better and better at achieving tasks that we once felt were uniquely human.  Assembly lines have replaced jobs previously held by day laborers, the electronic calculator has rendered the occupation which once shared the same name obsolete, and even human body parts which we once felt could not be replicated now have prosthetic doppelgangers.  It is in this pattern of technology emulating human behavior that the computer science community hurtles towards its next challenge: artificial intelligence.  The idea of a sentient but artificial being is a concept that science has not only embraced, but one whose social, political, and cultural ramifications have been explored through many forms of popular media.  Many technological prophets feel that this technology is imminent and obtainable, yet there are just as many who counter that it is impossible.  While this concept of artificial intelligence may be possible, it is much more likely that we are far from it, if we can obtain it at all, as the physical and technological barriers that computers face are far too great.  Assuming that this technology is somehow achieved, the predictions of its influence on our world range from a great social enhancement to total species annihilation at the hands of our own creation.  The most feasible outcome is probably at neither extreme, but a middle ground where this technology helps us but is still kept from reaching its full potential.
Statue of Alan Turing,
the man who devised
the Turing Test
To truly understand the complexities of artificial intelligence, one must first confront the question: just what exactly constitutes AI?  Because the question of intelligence can be subjective, a standardized test has been adopted in the field of computer science to set a bar at which we as humans determine whether or not a mechanical being is intelligent.  This analysis is aptly named the Turing Test after the man who first devised the experiment in 1950.  Turing proposed a setup in which a series of computers and human decoys have text-based terminal conversations with a number of judges, who must then decide whether or not the entity they are conversing with is human.  “Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result, one would ‘be able to speak of machines thinking without expecting to be contradicted’” (Christian).  The consensus is that if a being is able to fool someone into believing it is human for at least five minutes, then it must be intelligent.
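As a rough sketch of this pass criterion (the judges' verdicts here are invented for illustration, not taken from any real test), the 30-percent rule can be expressed in a few lines of Python:

```python
# Hypothetical sketch of Turing's pass criterion: a machine "passes" if it
# fools at least 30% of the judges after a five-minute text conversation.

def passes_turing_test(verdicts, threshold=0.30):
    """verdicts: list of booleans, True if a judge mistook the machine for a human."""
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= threshold

# Ten judges, three of whom were fooled -- exactly Turing's 30% bar:
verdicts = [True, False, False, True, False,
            False, False, True, False, False]
print(passes_turing_test(verdicts))  # → True
```

The threshold is the only real parameter; everything else in this sketch is invented scaffolding around Turing's prediction.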
But if a machine does indeed pass this test, which none has yet done, does this immediately certify it as intelligent?  Well, what do we as a populace define intelligence to be?  According to the Oxford English Dictionary, intelligence is “the faculty of understanding; intellect” (Intelligence, N.).  Just because a computer is able to imitate an intelligent human conversation, does it really understand what it is saying?  If the user inputs, “You’re funny,” does the machine take this with everything else that the statement implies?  The resulting output doesn’t take into account the compliment or sarcasm the statement may entail.  Even a response of “haha” is nothing more than a predefined output inserted by the programmer.  Even programs which are supposed to learn, such as Cleverbot, simply process input in a rigidly defined way.  This level of pattern matching is the basis for all artificial intelligence, even programs that follow a logic system, such as chess bots.  “Yes, a computer can play chess, but no, a computer can’t tell the pawn from the king.  That’s the hard problem.  You give him some Marcel Duchamp chess set where everybody else can tell what the pieces are, he can’t tell” (The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2).  A computer cannot reason in this instance that even though the pieces look different, the way it should interact with them should stay the same.  It needs to be explicitly told this, while a human can reason that although the pawns may look different, they still have the same function.
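A tiny, ELIZA-style sketch (with made-up rules, not taken from Cleverbot or any real system) shows how such a "conversation" can be nothing more than pattern matching against predefined outputs:

```python
import re

# Every response is a canned output chosen by pattern matching; the program
# manipulates text without understanding any of it.
RULES = [
    (re.compile(r"you'?re funny", re.IGNORECASE), "haha"),
    (re.compile(r"\bhello\b", re.IGNORECASE), "Hello! How are you today?"),
    (re.compile(r"\bI feel (\w+)", re.IGNORECASE), r"Why do you feel \1?"),
]

def respond(user_input):
    for pattern, reply in RULES:
        match = pattern.search(user_input)
        if match:
            return match.expand(reply)   # fill in any captured words
    return "Tell me more."               # fallback canned response

print(respond("You're funny"))   # → haha
print(respond("I feel sad"))     # → Why do you feel sad?
```

The "haha" here takes no account of compliment or sarcasm; it fires whenever the pattern matches, which is exactly the rigidity the paragraph above describes.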
An example of a Marcel Duchamp chess set.
Note the abstract shaped pieces.
While these issues and perceptions of intelligence may seem like recent phenomena, AI has its roots in scientific pursuits reaching all the way back to the 5th Century B.C., when “Aristotle invent[ed] syllogistic logic, the first formal deductive reasoning system” (AITopics / BriefHistory).  It moved from the talking heads supposedly created by Bacon in the 13th Century to the proposal in the 16th Century by Descartes that “the bodies of animals are nothing more than complex machines” (AITopics / BriefHistory).  Science has long been fascinated with this concept of AI, although the fascination has been more pronounced in recent years.  This may be surprising to some, as,
For a long time, people argued, well, it gets you into philosophy, it gets you into religion or metaphysics:  things that science by definition cannot deal very well with.  And science wants to deal with concrete things, […] and consciousness in general does not seem to have that character.  But since it’s part of the natural world, […] therefore if […] we want to have a complete and perfect description of the world, we cannot skirt around the issue of consciousness  (The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2). 
While the workings of the human brain seem illogical at times, scientists still believe that there must be some logical and structural explanation for how our brains work.  They seek to show that even our irrationality has its basis in rational processes, even if those occur in our unconscious.
Yet even with this focus and the rapid technological development of recent years, we are still far from AI.  The reason that we cannot get beyond ever more efficient and complex pattern-matching algorithms lies in the very way that computers operate.  The largest and most glaring barrier against AI, and the reason it is most likely not even possible, is the unresolved P vs. NP problem, the most important open problem in computer science.  Computers process information through algorithms rather than in the unique way our brains process information, and it is in this difference that we run into trouble.  “The P versus NP problem is to determine whether every language accepted by some nondeterministic algorithm in polynomial time is also accepted by some (deterministic) algorithm in polynomial time” (Cook).  Informally, a problem falls within the confines of P if it is quickly solvable, while it falls within NP if a proposed solution is quickly verifiable.
This concept may seem abstract and confusing at first glance, but it can be illustrated with the following example.  Suppose that we wanted an algorithm that outputs a subset of a given list of numbers whose sum is equal to zero.  This problem falls into the classification of NP.  While we can easily check whether a given subset sums to zero ({1,2,-1,-2} -> (1+2-1-2 = 0)), we cannot feasibly examine every possible subset in a reasonable amount of time, because the number of subsets grows exponentially with the size of the list.  No known algorithm finds such a subset in polynomial time, which is what places the problem in NP rather than P.  By contrast, the Euclidean algorithm for finding the greatest common divisor of two integers, one of the world’s oldest algorithms, runs in polynomial time due to the structure of how it determines its solutions and falls within the confines of P. 
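To make the contrast concrete, here is a small Python sketch (the example numbers are invented): verifying a proposed zero-sum subset takes a single pass, while finding one by brute force tries exponentially many subsets, and Euclid's gcd runs quickly by comparison.

```python
from itertools import combinations
from math import gcd

# Verifying a proposed zero-sum subset is fast: one linear pass.
def verify(subset):
    return sum(subset) == 0

# Finding one by brute force examines up to 2^n subsets, which blows up
# exponentially as the input list grows.
def find_zero_subset(numbers):
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == 0:
                return subset
    return None

print(verify([1, 2, -1, -2]))              # → True
print(find_zero_subset([3, 1, -4, 2, 8]))  # → (3, 1, -4)

# Euclid's algorithm, by contrast, runs in polynomial time:
print(gcd(1071, 462))  # → 21
```

The asymmetry between `verify` and `find_zero_subset` is exactly the gap that the P vs. NP question asks about: checking is cheap, searching is not.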

An XKCD Comic depicting The Traveling Salesman problem,
one of many which fall under the classification of NP.

P vs. NP matters because of the nature of the algorithms that would be necessary for an artificial intelligence.  The algorithms that would replicate our thoughts and cognitive reasoning appear to fall within the realm of NP, so in order for them to run efficiently at all we would have to prove that P = NP.  “P = NP means that for every problem that has an efficiently verifiable solution, we can find that solution efficiently as well” (Fortnow).  A solution to these problems would have to be found, which nobody has yet been able to do.  Our limitations in achieving this technology aren’t just the physical limitations of circuits or of not having fast enough processors.  Those are problems which can plausibly be overcome under Moore’s law, which “states that the number of transistors on a chip will double about every two years” (Moore's Law: Made Real by Intel Innovations).  This issue deals with the very nature of computers themselves, and it is a problem that will need to be solved before any real progress can be made.
To understand why this is vital to artificial intelligence, let’s look at how both a computer and a human would seek to solve a propositional logic problem.  For those unfamiliar with propositional logic, it is “a kind of logic in which the fundamental components are whole statements or propositions” (Hurley 680).  A problem is given as a set of axioms and a conclusion that the individual must reach.  In order to prove the given statement, the user may apply a set of rules of implication and rules of replacement, methods for rewriting a fact or deducing a new one.  When a computer is presented with such a problem, the method by which it attempts to solve it is a process we in our research have dubbed “forward chaining”.  The computer simply runs through every possible permutation of these axioms and inference rules, trying combination after combination until it finds a solution.  While some work has been done to make this process more efficient, it is still an algorithm that runs in nondeterministic polynomial time.  However, when a human is presented with such a problem, we don’t go through a similar process.  Students in logic courses solve problems like this all the time, (relatively) quickly.  That is because we as humans have some kind of notion, some kind of intuition, of how to solve the problem.  Our “algorithm” therefore falls under the classification of P.  The issue is getting the computer to run algorithms like this, but we have yet to be able to program intuition into a silicon chip.
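A minimal sketch of this brute-force style of forward chaining, using a couple of invented axioms (not an example from our research), might look like this in Python:

```python
# Rules are (premises, conclusion) pairs. The program blindly applies every
# rule whose premises are already known facts, over and over, until nothing
# new can be derived -- no intuition, just exhaustive application.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)    # derive a new fact
                changed = True
    return facts

# Axioms: P is true; P implies Q; Q and P together imply R.
rules = [(("P",), "Q"), (("Q", "P"), "R")]
print(sorted(forward_chain(["P"], rules)))  # → ['P', 'Q', 'R']
```

With rules this simple the loop terminates quickly, but each pass re-scans every rule against every known fact; as the axiom set grows, this exhaustive search is what balloons, where a logic student would jump straight toward the conclusion.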

Futurist Ray Kurzweil

Yet many are still confident about the impending arrival of artificial intelligence.  Futurist Ray Kurzweil, one of the most reputable theorists on the evolution of technology, does believe this is possible.  Not only does he view it as feasible, but he reads our current rate of technological growth as indicating that it will happen very soon.  “In 2029, nanobots will circulate through the bloodstream of the human body to diagnose illnesses” (Kurzweil 307).  However, this is technology that doesn’t require the complexities of AI.  Kurzweil also asserts that “by 2029 […] we will have completed the reverse engineering of the human brain” (Ray Kurzweil Explains the Coming Singularity).  He supports this with the fact that we have already modeled several sections of our brains, including the cerebral, auditory, and visual cortices.  He claims that this will provide us with all the algorithms needed to simulate human intelligence.  Whether or not this will run up against the P vs. NP barrier remains to be seen, as just because we understand the structure of the brain doesn’t necessarily mean that we can use that knowledge to develop the necessary algorithms.
It is perhaps this perceived looming of AI that accounts for its influence upon popular media and culture, with their perceptions and predictions of a future where AI is an everyday technology.  It seems that instead of embracing this possible technology and striving for it, we have taken it upon ourselves to warn of the dangers of an AI if it were ever to surface.  Numerous pieces of media warn of a dystopian future, or a world where mankind has been completely destroyed by the machines.  This is seen in the acclaimed film The Matrix, where the character of Morpheus explains how the world that humanity now resides in is simply a computer program controlled by the AI, who have destroyed the world of humanity and forced us into enslavement.  The fear not only of destruction but also of the ramifications of creating life and “playing God” is reflected as far back as the early 1800s in the iconic book Frankenstein.  Here, the monster that is created goes on a murderous rampage against those that created him.  He notes, “The nearer I approached to your habitation, the more deeply did I feel the spirit of revenge enkindled in my heart” (Shelley 124).  It is more popular in modern culture to give these advancements a connotation of destruction and demise, which some speculate may be in keeping with human nature.  People are often afraid of what they don’t understand or comprehend, whether it be other cultures, the supernatural, or even the dark.  Since we still don’t have any true idea of how this technology will develop, it may be only natural that we associate this feeling with AI as well.

Scene from the movie The Matrix

There are still other futurists who warmly embrace this coming of AI.  These individuals note the good that these machines could do and how they could make all of human life easier by continuing the pattern of technology replacing humans in various tasks.  They cite the notion of the singularity:  “Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more effective at building still smarter minds” (What Is the Singularity?).  This process would unfold almost instantaneously after a computer reaches that initial threshold, creating a machine that far exceeds the limits of the human mind.  It could achieve things that we never could physically do, and therefore could make improvements that we could never consider or achieve.
However, our preconceived notions about AI will likely leave us in a future somewhere between these utopias and dystopias.  Because of the influence of mass media, we as a people are understandably cautious about the power that an AI could have.  If such an intelligence were ever even close to being physically achieved, it is more than likely that limits and safety mechanisms would be hardwired in so that we remain in control of the machine’s influence.  This should prevent the disasters seen in movies such as The Matrix.  However, these limits would also hinder the potential of AI, leaving it unable to grow and attain the abilities dreamed of by the optimistic futurists.  By putting any restraints on its intelligence, the idea of the singularity becomes impossible.  While William Gibson’s novel Neuromancer does indeed portray a dystopian technological future, its premise of AIs trapped within the confines set up for them by humanity may well prove accurate if the notion of AI were ever to come to fruition.
As we progress towards a more and more technological society, the question of AI will become a bigger and bigger issue.  While artificial intelligence is currently not possible due to technological limitations and P vs. NP, there may be ways around these barriers and solutions yet to be discovered.  While this is unlikely, and a true artificial sentient being will more than likely never come into being in our world, it is unknown how we as a society would handle it if it did.  The issue is that we have never dealt with a form of life other than our own before.  How would we treat it?  If the discoveries that ancient cultures and societies made of each other are any guide, one of two options will emerge.  We will either live in harmonious unity with our created life, or, like the European settlers who instead chose to destroy the natives, one form of life may have to make room for the other.

Works Cited
"AITopics / BriefHistory." Association for the Advancement of Artificial Intelligence. Web. 16 Mar. 2011. <http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/BriefHistory>.
Christian, Brian. "Mind vs. Machine." The Atlantic (2011). Web. 27 Mar. 2011. <http://www.theatlantic.com/magazine/archive/1969/12/mind-vs-machine/8386/>.
Cook, Stephen. "The P versus NP Problem." The Clay Mathematics Institute. Web. 16 Mar. 2011. <http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf>.
Fortnow, Lance. "The Status of the P versus NP Problem." The University of Chicago. Web. 16 Nov. 2010. <http://people.cs.uchicago.edu/~fortnow/papers/pnp-cacm.pdf>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=Q2JD5xg6weE>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=QN1l5e1yamU>.
Hurley, Patrick J.  A Concise Introduction to Logic.  10th ed.  Belmont:  Thomas Wadsworth, 2008.  Print.
"Intelligence, N." Oxford English Dictionary. Oxford University Press, Nov. 2010. Web. 28 Mar. 2011. <http://tinyurl.com/6dbxo42>.
Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking, 1999. Print.
“Moore's Law: Made Real by Intel Innovations." Intel. Web. 28 Mar. 2011. <http://www.intel.com/technology/mooreslaw/>.
"Ray Kurzweil Explains the Coming Singularity." YouTube. 28 Apr. 2009. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=1uIzS1uCOcE>.
Shelley, Mary W., and Johanna M. Smith. Frankenstein. Boston: Bedford/St. Martin's, 2000. Print.
"What Is the Singularity?" Singularity Institute for Artificial Intelligence. Web. 28 Mar. 2011. <http://singinst.org/overview/whatisthesingularity>.

Sunday, April 3, 2011

Project 4 Script

“What are the odds that you think it’ll work?” inquired Jacquard.  “I can’t really make an accurate prediction,” Babbage responded.  “When put into an atmosphere like this, you never really know how they’ll respond.”  His doubt was justified.  Even with the year being 2142, no one had yet developed an intelligence that could surpass the rigor of the Modern Turing Test. 
It was truly amazing how far the test had progressed since its conception by the computer scientist Alan Turing in 1950.  Even the handful of terminals and primitive participants of the tests dating from the mid two-thousands paled in comparison to the scale and complexity of the setup today.  Five minutes may have sufficed as a standard for the tests of a century ago, but now a test was restricted to simply 5000 interactions.  Granted, this still took only a few seconds; the standard of human interaction with a keyboard was no longer needed, since output could be gathered through the network receptor that was now hardwired into every human brain.
For Babbage and Jacquard this was their fourth entry into the annual competition, and both computed that their chances of success had increased dramatically compared to previous years.  While the human brain had been completely reconstructed structurally years ago, this wasn’t enough.  In order to create a contestant for the Modern Turing Test, what mattered was how those connections actually worked. 
“It’s started,” Jacquard alerted Babbage.  Both tracked the conversation as it happened, each monitoring the output presented to each query.  Greetings were met with compliments.  Historical events were mentioned and discussed.  Even conversations that appeared to exchange emotion were observed.  However, the dreaded query involving Ackermann’s function was inevitably posed.  Both watched as higher and higher values of the mathematical function were posed, each time answered with the correct integer.
A(4,2) was posed, and both realized that it was over.  Sure enough, computing the value of the 19,729-digit number proved too great a task for the subject.  Both artificial intelligences looked over at the human subject, sitting exhausted in a chair in the test’s containment room after attempting to calculate the number.  Despite the fact that the pair did obtain the “Most Machine” prize, it turned out the synthetic enhancement to their subject’s brain simply wasn’t enough.  “Do you calculate it to be possible,” Jacquard inquired, “that we’ll ever be able to create a human intelligence that is comparable to us machines?”
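The 19,729-digit figure in the script can actually be checked: for m = 4 the Ackermann function has a known closed form, a tower of twos, so A(4,2) = 2**65536 - 3 and its digit count can be computed directly in Python (a naive recursive Ackermann would never finish on these inputs):

```python
# For m = 4, A(4, n) = 2^2^...^2 - 3, a tower of n + 3 twos.
# So A(4, 2) = 2**(2**16) - 3 = 2**65536 - 3.
def ackermann_4(n):
    value = 2
    for _ in range(n + 2):   # build a tower of n + 3 twos
        value = 2 ** value
    return value - 3

print(len(str(ackermann_4(2))))  # → 19729, as the story says
```

Small sanity checks against the known values A(4,0) = 13 and A(4,1) = 65533 confirm the closed form before trusting the big one.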

Tuesday, March 29, 2011

The Appetite of Big Business

In the future, the world will no longer see governments as the main authoritarian forces on our planet.  Rather, the executives from Google, Microsoft, Verizon, and other communication and technology companies will be the ones sitting around a version of the United Nations table discussing plans for the people that inhabit the Earth.  At least, that’s what almost every piece of futuristic literature we’ve read this semester suggests.  Wu told us about the danger of monopolies and moguls, Gibson painted a world controlled by the zaibatsus, and now M.T. Anderson’s novel Feed depicts a world controlled by the technology corporations.
However, instead of Wu’s version, where modern-day people are uneducated about the evils of modern corporations, or Neuromancer’s, where the corporations have created desolate areas of the world, the people within Feed knowingly accept the dangers and evils posed by these big businesses.  “We all know they control everything.  I mean, it’s not great, because who knows what evil shit they’re up to” (Anderson 48).  These people have knowingly let the corporations become this big.  However, their sole reasoning behind this isn’t simply that the corporations keep all the world perpetually employed.
No, the people in Feed tolerate this because they are dependent on the technology that these corporations output.  We see the result when they are disconnected from it, and their only way to keep this technology is to accept the role that these monopolies play and the power that they exert.  What’s more, their consumption of technology is what enabled these businesses to become as powerful as they are, and yet everyone seems to be perfectly fine with that as long as they get their electronic addiction fix.
What’s scary is how we as a culture may be headed down a similar road.  While we are nowhere near as bad as this dystopia, we do rely heavily on technology.  Because of this, these companies have an enormous amount of control over us.  If a company like Apple tells us that a product is cool and we need it, no matter its technical flaws or failure to make any real innovation (*cough* iPad *cough*), millions of people rush out to get it.  Imagine how Google could control us if it started to arbitrarily block content.  Yet what would we do?  Most likely just accept it and move on, if the change were gradual enough.  It’s scary to think that the roots of this future may already lie within society today.

Monday, March 28, 2011

Project 3 - Exploration into AI

With the introduction of technology, machines have been able to better and better achieve tasks that we felt before were uniquely human.  Assembly lines have replaced jobs previously held by day laborers, the electronic calculator has rendered the occupation which once shared the same name obsolete, and even human body parts which we once felt were unable to be replicated now have prosthetic doppelgangers.  It is then in this pattern of technology emulating human behavior that the computer science community hurtles towards its next challenge: artificial intelligence.  The idea of a sentient but artificial being is a concept that not only science has embraced, but it is also a controversial concept who’s social, political, and cultural ramifications have been explored through many forms of popular media.  The idea that this technology is imminent and obtainable is one which many technological prophets feel, yet there are just as many who counter that this technology is impossible.  While perhaps this concert of artificial intelligence is possible, it is much more likely that we are far from it if we can obtain it at all as the physical technological barriers that computers face are far too great.  Also, assuming that this technology is somehow achieved, the predictions of its influence on our world range from being a great social enhancement to total species annihilation at the hands of our own creation.  The most feasible outcome is probably not at either extreme, but a more middle ground where this technology helps us but is still controlled from reaching its full potential.
To truly understand the complexities of artificial intelligence, one must first understand the question, just what exactly constitutes the definition of AI?  Because the question of intelligence can be subjective, a standardized test has been implemented in the field of computer science to set a bar at which we as humans determine whether or not a mechanical being is intelligent.  This analysis is aptly named the Turing Test after the man who first came up with this experiment in 1950.  Turing proposed the idea where a series of computers and human decoys have a text based terminal conversation with a number of judges, who must therefore decide whether or not the entity they are conversing with is human or not.  “Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result, one would “be able to speak of machines thinking without expecting to be contradicted”” (Christian).  The consensus is that if a being is able to fool someone into being human for at least five minutes, then it must be intelligent.
But, if a machine does indeed pass this test, which one has yet to do, does this immediately grant it certification as being intelligent?  Well, what do we as a populace define intelligence to be as?  According to the Oxford English Dictionary, intelligence is defined as “the faculty of understanding; intellect” (Intelligence, N.).  Just because this computer is able to imitate an intelligent human conversation, does it really understand what it is saying?  If the user inputs, “You’re funny,” does the machine take this with everything else that this statement implies?  The resulting output doesn’t take into account the compliment or sarcasm the statement may entail.  Even a response of “haha” is nothing more than a predefined output inserted from the programmer.  Even programs which are supposed to learn, such as CleverBot,  simply process input in a rigidly defined way.  The level of pattern matching is the basis for all artificial intelligence, even ones that follow a logic system such as chess bots.  “Yes, a computer can play chess, but no, a computer can’t tell the pawn from the king.  That’s the hard problem.  You give him some Marcel Duchamp chess set where everybody else can tell what the pieces are, he can’t tell” (The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2).
While these questions about intelligence may seem like recent phenomena, AI has roots in scientific pursuits reaching back to the 4th century B.C., when “Aristotle invented syllogistic logic, the first formal deductive reasoning system” (AITopics / BriefHistory).  The thread runs from the talking heads supposedly built by Bacon in the 13th century to Descartes’s proposal in the 16th century that “the bodies of animals are nothing more than complex machines” (AITopics / BriefHistory).  Science has long been fascinated with the concept of AI, though the fascination has grown more pronounced in recent years.  This may be surprising to some, as “for a long time, people argued well it gets you into philosophy, it gets you into religion or metaphysics:  things that science by definition cannot deal very well with.  And science wants to deal with concrete things, […] and consciousness in general does not seem to have that character.  But since it’s part of the natural world, […] therefore if […] we want to have a complete and perfect description of the world we cannot skirt around the issue of consciousness” (The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2).
Yet even with this focus and the rapid technological development of recent years, we are still far from AI.  The reason we cannot get beyond ever more efficient and complex pattern-matching algorithms lies in the very way computers operate.  The largest and most glaring barrier, and the reason AI is most likely not even possible, is the unresolved P versus NP problem, the most important open question in computer science.  Computers process information through algorithms rather than in the unique way our brains process it, and it is in this difference that the trouble arises.  “The P versus NP problem is to determine whether every language accepted by some nondeterministic algorithm in polynomial time is also accepted by some (deterministic) algorithm in polynomial time” (Cook).  Roughly speaking, a problem is in P if it can be solved quickly, while it is in NP if a proposed solution can be verified quickly.
This concept may seem abstract and confusing at first glance, but a simple example clarifies it.  Suppose we want an algorithm that finds, within a set of integers, a subset whose sum is equal to zero.  This problem falls into the classification of NP.  While we can easily check whether a given subset sums to zero ({1, 2, -1, -2} -> 1 + 2 - 1 - 2 = 0), no known method can search every possible subset quickly: a set of n numbers has 2^n subsets, so the search grows exponentially rather than running in polynomial time, and it is this gap between fast verification and slow search that places the problem in NP.  By contrast, the Euclidean algorithm for finding the greatest common divisor of two integers, one of the world’s oldest algorithms, runs in polynomial time because of how it narrows down its solution, and so falls within the confines of P. 
P versus NP is important because of the nature of the algorithms an artificial intelligence would require.  The algorithms that would replicate our thoughts and cognitive processes would fall within the realm of NP, so for them to function at all we would have to prove that P = NP.  “P = NP means that for every problem that has an efficiently verifiable solution, we can find that solution efficiently as well” (Fortnow).  Nobody has yet been able to prove this.  Our limitations in achieving this technology are not merely the physical ones of circuit materials or processor speed; those are problems that can be solved under Moore’s law, which “states that the number of transistors on a chip will double about every two years” (Moore's Law: Made Real by Intel Innovations).  This issue concerns the very nature of computers themselves, and it will need to be resolved before any real progress can be made.
Yet many are still confident about the impending arrival of artificial intelligence.  The futurist Ray Kurzweil, one of the most reputable theorists of technological progress, believes it is possible.  Not only does he view it as feasible, but he takes our current rate of technological growth to indicate that it will happen very soon.  “In 2029, nanobots will circulate through the bloodstream of the human body to diagnose illnesses” (Kurzweil 307).  That particular technology, however, does not require the complexities of AI.  Kurzweil also asserts that “by 2029 […] we will have completed the reverse engineering of the human brain” (Ray Kurzweil Explains the Coming Singularity).  He supports this with the fact that we have already engineered models of several regions of the brain, including the auditory and visual cortices.  He claims this will provide us with all the algorithms needed to simulate human intelligence.  Whether this will run up against the P versus NP barrier remains to be seen.
It is perhaps this perceived looming of AI that explains its influence on popular media and culture, which perceive and predict a future where AI is an everyday technology.  It seems that instead of embracing this possible technology and striving for it, we have taken it upon ourselves to warn of the dangers an AI would pose if it ever surfaced.  Numerous works depict a dystopian future or a world of man completely destroyed by the machines.  This is seen in the acclaimed film The Matrix, where the character Morpheus explains that the world humanity now resides in is simply a computer program controlled by the AI, which have destroyed human civilization and enslaved us.  The fear not only of destruction but of the ramifications of creating life and the idea of “playing God” reaches as far back as the early 1800s in the iconic novel Frankenstein.  Here, the monster goes on a murderous rampage against those who created him, noting, “The nearer I approached to your habitation, the more deeply did I feel the spirit of revenge enkindled in my heart” (Shelley 124).
There are still other futurists who warmly embrace the coming of AI.  These individuals note the good such machines could do and how they could make all of human life easier by continuing the pattern of technology replacing humans at various tasks.  They cite the notion of the singularity:  “Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more effective at building still smarter minds” (What Is the Singularity?).  This process would occur almost instantaneously once a computer reached that initial threshold, creating a machine that far exceeds the limits of the human mind.  It could achieve things we never physically could, and therefore make improvements we could not conceive or carry out.
However, our preconceived notions about AI would leave us in a future somewhere between these utopias and dystopias.  Because of the influence of mass media, we as a people are understandably cautious about the power an AI could hold.  If such an intelligence ever came close to being physically achieved, limits and safety mechanisms would more than likely be hardwired in so that we remained in control of the machine’s influence.  This should prevent the disasters seen in movies such as The Matrix.  But these same limits would hinder the potential of AI, preventing the growth and ability dreamed of by the optimistic futurists.  By placing any restraints on its intelligence, we make the singularity impossible.  While William Gibson’s novel Neuromancer does portray a dystopian technological future, its premise of AIs trapped within confines set up for them by humanity may well prove accurate if AI ever comes to fruition.
As we progress toward an ever more technological society, the question of AI will become a bigger and bigger issue.  While artificial intelligence is currently impossible due to technological limitations and the P versus NP barrier, there may be ways around them and solutions yet to be discovered.  Although a truly sentient artificial being will more than likely never come into existence in our world, it is unknown how we as a society would handle one.  The issue is that we have never dealt with a form of life other than our own.  How would we treat it?  If history’s encounters between previously isolated cultures are any guide, one of two outcomes will emerge:  we will either live in harmonious unity with our created life, or, like the European settlers who chose instead to destroy the natives, one form of life may have to make room for the other.

Works Cited
"AITopics / BriefHistory." Association for the Advancement of Artificial Intelligence. Web. 16 Mar. 2011. <http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/BriefHistory>.
Christian, Brian. "Mind vs. Machine." The Atlantic (2011). Web. 27 Mar. 2011. <http://www.theatlantic.com/magazine/archive/1969/12/mind-vs-machine/8386/>.
Cook, Stephen. "The P versus NP Problem." The Clay Mathematics Institute. Web. 16 Mar. 2011. <http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf>.
Fortnow, Lance. "The Status of the P versus NP Problem." The University of Chicago. Web. 16 Nov. 2010. <http://people.cs.uchicago.edu/~fortnow/papers/pnp-cacm.pdf>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=Q2JD5xg6weE>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=QN1l5e1yamU>.
"Intelligence, N." Oxford English Dictionary. Oxford University Press, Nov. 2010. Web. 28 Mar. 2011. <http://tinyurl.com/6dbxo42>.
Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking, 1999. Print.
“Moore's Law: Made Real by Intel Innovations." Intel. Web. 28 Mar. 2011. <http://www.intel.com/technology/mooreslaw/>.
"Ray Kurzweil Explains the Coming Singularity." YouTube. 28 Apr. 2009. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=1uIzS1uCOcE>.
Shelley, Mary W., and Johanna M. Smith. Frankenstein. Boston: Bedford/St. Martin's, 2000. Print.
"What Is the Singularity?" Singularity Institute for Artificial Intelligence. Web. 28 Mar. 2011. <http://singinst.org/overview/whatisthesingularity>.

Tuesday, March 22, 2011

You Only Know that You Are, if You Do Not Know that You Do Not Know, if You Are or Are Not

The concept of reality and its abnormalities isn’t a particularly new phenomenon.  Works across popular media, ranging from Plato’s Allegory of the Cave, to Darl’s statement (aka the title of this post) in Faulkner’s As I Lay Dying, to modern movies such as Inception, have explored the possibility that our current perception of reality may not be reality itself.  One work that presents this expertly is the Wachowski brothers’ film The Matrix.  In one of the defining scenes of the movie, Neo is presented with the choice of a blue pill or a red pill:  one leaving him in the world he knows, the other revealing the truth about exactly where he resides.
The question this scene imparts to the viewer is:  if you were given the choice, would you take the path toward the truth, or would you be content with the world you currently live in?  Some reason that ignorance truly is bliss and would happily remain in their current known state.  Invoking the risk-aversion principle taught in almost every macroeconomics class, they reason that the known is better than the unknown, and that the risk of a potentially horrible reality outweighs whatever utopia may exist outside its walls.  There are still others who value knowledge as the most precious commodity of all and would happily accept whatever lies outside The Matrix.  For them, the artificial simulation, no matter how accurate, can never satisfy, and only through the truth, no matter how dark, can they gain any freedom. 
The question then parallels this one:  if you could obtain and understand all knowledge, all information, all events in our universe, would you?  Would you be willing to accept the evil knowledge that would come along with the good?  Would you happily give up the joy of the pursuit of enlightenment to have all you ever wanted to know, and so much more, spread out on the table before you?  If I were facing the choice Neo had to make, I feel I would follow his decision and abandon the womb of The Matrix.  I don’t know if I could stand not knowing what lay beyond those digital walls, no matter how bad it was.  Plus, even if the future were as bleak and grim as portrayed here, at least it would come with a cool black outfit and bullet time.