
Tuesday, March 29, 2011

The Appetite of Big Business

In the future, the world will no longer see government as the main authoritarian force on our planet.  Rather, the executives from Google, Microsoft, Verizon, and other communication and technology companies will be the ones sitting around a version of the United Nations table discussing plans for the people that inhabit the Earth.  At least, that’s what almost every piece of futuristic literature we’ve read this semester suggests.  Wu told us about the danger of monopolies and moguls, Gibson painted a world controlled by the zaibatsus, and now M.T. Anderson’s novel Feed gives us a world controlled by the technology corporations.
However, unlike Wu’s account, where modern-day people are uneducated about the evils of modern corporations, or Neuromancer, where the corporations have created desolate areas of the world, the people in Feed knowingly accept the dangers and evils posed by these big businesses.  “We all know they control everything.  I mean, it’s not great, because who knows what evil shit they’re up to” (Anderson 48).  These people have knowingly let the corporations become this big.  However, their sole reason for this isn’t simply that the corporations keep the whole world perpetually employed.
No, the people in Feed tolerate this because they are dependent on the technology that these corporations produce.  We see what happens when they are disconnected from it, and their only way to keep this technology is to accept the role that these monopolies play and the power that they exert.  What’s more, their consumption of technology is what enabled these businesses to become as powerful as they are, and yet everyone seems perfectly fine with that as long as they get their electronic addiction fix.
What’s scary is how we as a culture may be headed down a similar road.  While we are nowhere near as bad as this dystopia, we do rely heavily on technology.  Because of this, these companies have an enormous amount of control over us.  If a company like Apple tells us that a product is cool and we need it, no matter its technical flaws or failure to make any real innovation (*cough* iPad *cough*), millions of people rush out to get it.  Imagine how Google could control us if it started to arbitrarily block content.  Yet what would we do?  Most likely just accept it and move on, if it were gradual enough.  It’s scary to think that the roots of this future may already lie within society today.

Monday, March 28, 2011

Project 3 - Exploration into AI

With the introduction of technology, machines have become better and better at tasks that we once felt were uniquely human.  Assembly lines have replaced jobs previously held by day laborers, the electronic calculator has rendered obsolete the occupation that once shared its name, and even human body parts that we once felt could not be replicated now have prosthetic doppelgangers.  It is in this pattern of technology emulating human behavior that the computer science community hurtles toward its next challenge: artificial intelligence.  The idea of a sentient but artificial being is one that not only science has embraced; it is also a controversial concept whose social, political, and cultural ramifications have been explored through many forms of popular media.  Many technological prophets feel that this technology is imminent and obtainable, yet just as many counter that it is impossible.  While artificial intelligence may perhaps be possible, it is much more likely that we are far from it, if we can obtain it at all, because the technical barriers that computers face are far too great.  Also, assuming that this technology is somehow achieved, the predictions of its influence on our world range from a great social enhancement to total species annihilation at the hands of our own creation.  The most feasible outcome probably lies at neither extreme, but on a middle ground where this technology helps us but is kept from reaching its full potential.
To truly understand the complexities of artificial intelligence, one must first confront the question: what exactly constitutes AI?  Because the question of intelligence can be subjective, a standardized test has been adopted in the field of computer science to set the bar at which we as humans decide whether or not a mechanical being is intelligent.  This analysis is aptly named the Turing Test after the man who first proposed the experiment in 1950.  Turing imagined a setup in which a series of computers and human decoys hold text-based terminal conversations with a number of judges, who must then decide whether or not the entity they are conversing with is human.  “Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result, one would “be able to speak of machines thinking without expecting to be contradicted”” (Christian).  The consensus is that if a being is able to fool someone into believing it is human for at least five minutes, then it must be intelligent.
But if a machine does indeed pass this test, which none has yet done, does this immediately grant it certification as intelligent?  Well, what do we as a populace define intelligence to be?  According to the Oxford English Dictionary, intelligence is defined as “the faculty of understanding; intellect” (Intelligence, N.).  Just because this computer is able to imitate an intelligent human conversation, does it really understand what it is saying?  If the user inputs, “You’re funny,” does the machine grasp everything that this statement implies?  The resulting output doesn’t take into account the compliment or the sarcasm the statement may entail.  Even a response of “haha” is nothing more than a predefined output inserted by the programmer.  Even programs that are supposed to learn, such as Cleverbot, simply process input in a rigidly defined way.  This kind of pattern matching is the basis for all current artificial intelligence, even programs that follow a logic system, such as chess bots.  “Yes, a computer can play chess, but no, a computer can’t tell the pawn from the king.  That’s the hard problem.  You give him some Marcel Duchamp chess set where everybody else can tell what the pieces are, he can’t tell” (The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2).
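To make this concrete, here is a minimal sketch in Python (the patterns and replies are my own inventions, not taken from any real chatbot) of the kind of predefined pattern matching described above.  Every “response” is a canned output tied to an input pattern; nothing in the code knows what the words mean.

import re

# A hypothetical, minimal pattern-matching "conversationalist": input is checked
# against fixed patterns and a predefined output is returned. No understanding involved.
RULES = [
    (re.compile(r"you('re| are) funny", re.I), "haha, thanks!"),      # canned reply, blind to compliment or sarcasm
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),     # echoes the user's own words back
    (re.compile(r"\b(hi|hello)\b", re.I), "Hello! How are you today?"),
]
FALLBACK = "Tell me more."

def respond(user_input: str) -> str:
    """Return a scripted reply by checking the input against each fixed pattern."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("You're funny"))     # -> haha, thanks!
print(respond("I feel ignored"))   # -> Why do you feel ignored?
print(respond("What is a pawn?"))  # -> Tell me more.  (no matching pattern, no understanding)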
While these issues and perceptions of intelligence may seem like recent phenomena, AI has its roots in scientific pursuits stretching all the way back to the 5th century B.C., when “Aristotle invented syllogistic logic, the first formal deductive reasoning system” (AITopics / BriefHistory).  It then moved through the talking heads supposedly created by Bacon in the 13th century to the proposal by Descartes in the 16th century that “the bodies of animals are nothing more than complex machines” (AITopics / BriefHistory).  Science has long been fascinated with the concept of AI, although the fascination has become more pronounced in recent years.  This may be surprising to some, as “for a long time, people argued well it gets you into philosophy, it gets you into religion or metaphysics:  things that science by definition cannot deal very well with.  And science wants to deal with concrete things, […] and consciousness in general does not seem to have that character.  But since it’s part of the natural world, […] therefore if […] we want to have a complete and perfect description of the world we cannot skirt around the issue of consciousness” (The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2).
Yet, even with this focus and the rapid technological development of recent years, we are still far from AI.  The reason we cannot get beyond ever more efficient and complex pattern-matching algorithms lies in the very way that computers operate.  The largest and most glaring barrier against AI, and the reason it is most likely not even possible, is the unresolved P vs. NP problem, arguably the most important open problem in computer science.  Computers process information through algorithms rather than in the unique way our brains process information, and it is in this difference that we run into trouble.  “The P versus NP problem is to determine whether every language accepted by some nondeterministic algorithm in polynomial time is also accepted by some deterministic algorithm in polynomial time” (Cook).  Roughly speaking, a problem is in P if it can be solved quickly, while a problem is in NP if a proposed solution to it can be verified quickly.
This concept may seem abstract and confusing at first glance, but it can be illustrated with the following example.  Suppose we want an algorithm that, given a set of integers, outputs a subset whose sum is equal to zero.  This problem falls into the classification of NP.  While we can easily verify whether a given subset sums to zero ({1, 2, -1, -2} -> 1 + 2 - 1 - 2 = 0), finding such a subset may require examining every possible combination, and the number of subsets doubles with each element added, so no known algorithm can perform the search in polynomial time.  The problem is only known to be solvable in polynomial time by a nondeterministic machine, which places it in NP.  By contrast, the Euclidean algorithm for finding the greatest common divisor of two integers, one of the world’s oldest algorithms, runs in polynomial time due to the structure of how it determines its solution, and so the problem it solves falls within the confines of P.
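As an illustrative sketch of this contrast (the example numbers below are my own), checking a proposed subset is a single pass of addition, brute-force search has to try exponentially many subsets, and Euclid’s algorithm finishes quickly no matter the input:

from itertools import combinations

def is_zero_sum(subset):
    """Verifying a proposed answer is easy: one pass of addition (polynomial time)."""
    return sum(subset) == 0

def find_zero_sum_subset(numbers):
    """Finding an answer by brute force means trying every non-empty subset.
    A set of n numbers has 2**n - 1 of them, so the search grows exponentially."""
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if is_zero_sum(subset):
                return subset
    return None

def gcd(a, b):
    """Euclid's algorithm runs in polynomial time, so this problem sits comfortably in P."""
    while b:
        a, b = b, a % b
    return a

print(is_zero_sum([1, 2, -1, -2]))              # True -- quick to check
print(find_zero_sum_subset([3, 7, -10, 5, 1]))  # (3, 7, -10) -- found only by exhaustive search
print(gcd(48, 36))                              # 12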
P vs. NP is important because of the nature of the algorithms that would be necessary for artificial intelligence to exist.  The algorithms that would replicate our thoughts and cognitive processes would fall within the realm of NP, so to allow them to function at all we would have to prove that P = NP.  “P = NP means that for every problem that has an efficiently verifiable solution, we can find that solution efficiently as well” (Fortnow).  A solution to this problem would have to be found, which nobody has yet been able to do.  Our limitations in achieving this technology aren’t just the physical limitations of circuits or the lack of fast enough processors.  Those are problems that can be solved under Moore’s law, which “states that the number of transistors on a chip will double about every two years” (Moore's Law: Made Real by Intel Innovations).  This issue concerns the very nature of computers themselves, and it is a problem that will need to be solved before any real progress can be made.
Yet many are still confident about the impending arrival of artificial intelligence.  Futurist Ray Kurzweil, one of the most reputable theorists on the evolution of technology, does believe it is possible.  Not only does he consider it feasible, but he views our current rate of technological growth as an indication that it will happen very soon.  “In 2029, nanobots will circulate through the bloodstream of the human body to diagnose illnesses” (Kurzweil 307).  However, that is technology that doesn’t require the complexities of AI.  Kurzweil also asserts that “by 2029 […] we will have completed the reverse engineering of the human brain” (Ray Kurzweil Explains the Coming Singularity).  He supports this with the fact that we have already reverse-engineered several regions of the brain, including the cerebral, auditory, and visual cortices.  He claims that this will provide us with all the algorithms needed to simulate human intelligence.  Whether or not this will conflict with the P vs. NP problem remains to be seen.
It is perhaps this perceived looming of AI that explains its influence on popular media and culture, which are full of perceptions and predictions of a future where AI is an everyday technology.  It seems that instead of embracing this possible technology and striving for it, we have taken it upon ourselves to warn of the dangers of an AI if one were ever to surface.  Numerous pieces of media warn of a dystopian future or a world of man completely destroyed by the machines.  This is seen in the acclaimed film The Matrix, where the character of Morpheus explains how the world that humanity now resides in is simply a computer program controlled by the AI, who have destroyed the human world and forced us into enslavement.  The fear not only of destruction but also of the ramifications of creating life and the idea of “playing God” is reflected as far back as the early 1800s in the iconic book Frankenstein.  Here, the monster that is created goes on a murderous rampage against his creator.  He notes, “The nearer I approached to your habitation, the more deeply did I feel the spirit of revenge enkindled in my heart” (Shelley 124).
There are still other futurists who warmly embrace this coming of AI.  These individuals note the good that such machines could do and how they could make human life easier by continuing the pattern of technology replacing humans in various tasks.  They cite the notion of the singularity:  “Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more effective at building still smarter minds” (What Is the Singularity?).  This process would occur almost instantaneously once a computer reached that initial threshold, creating a machine that far exceeds the limits of the human mind.  It could achieve things that we never could physically do, and therefore could make improvements that we could never consider or achieve.
However, our preconceived notions about AI will leave us in a future that is somewhere between these utopias and dystopias.  Because of the influence of mass media, we as a people are understandably cautious about the power that an AI could have.  If an artificial intelligence were ever even close to being physically achieved, it is more than likely that limits and safety mechanisms would be hardwired in so that we remain in control of the machine’s influence.  This should prevent the disaster seen in movies such as The Matrix.  However, these limits would also hinder the potential of AI, leaving it unable to grow into the ability dreamed of by the optimistic futurists.  Any restraints placed on its intelligence would make the idea of the singularity impossible.  While William Gibson’s novel Neuromancer does portray a dystopian technological future, its premise of AIs trapped within the confines set up for them by humanity may well prove accurate if the notion of AI ever comes to fruition.
As we progress toward a more and more technological society, the question of AI will become a bigger and bigger issue.  While artificial intelligence is currently not possible due to technological limitations and the P vs. NP problem, there may be ways around these barriers and unknown solutions yet to be discovered.  Even though it is unlikely, and a truly sentient artificial being may never come into existence in our world, it is unknown how we as a society would handle one if it did.  The issue is that we have never dealt with another intelligent form of life besides our own.  How would we treat it?  If we consider how human cultures and societies have historically reacted to discovering one another, one of two outcomes seems likely.  We will either live in harmonious unity with our created life, or, like the European settlers who chose to destroy the natives, one form of life may have to make room for the other.

Works Cited
"AITopics / BriefHistory." Association for the Advancement of Artificial Intelligence. Web. 16 Mar. 2011. <http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/BriefHistory>.
Christian, Brian. "Mind vs. Machine." The Atlantic (2011). Web. 27 Mar. 2011. <http://www.theatlantic.com/magazine/archive/1969/12/mind-vs-machine/8386/>.
Cook, Stephen. "The P versus NP Problem." The Clay Mathematics Institute. Web. 16 Mar. 2011. <http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf>.
Fortnow, Lance. "The Status of the P versus NP Problem." The University of Chicago. Web. 16 Nov. 2010. <http://people.cs.uchicago.edu/~fortnow/papers/pnp-cacm.pdf>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 1 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=Q2JD5xg6weE>.
"The Hard Problem of AI: Selection from Matrix Science Documentary Part 2 of 2." YouTube. 27 July 2010. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=QN1l5e1yamU>.
"Intelligence, N." Oxford English Dictionary. Oxford University Press, Nov. 2010. Web. 28 Mar. 2011. <http://tinyurl.com/6dbxo42>.
Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking, 1999. Print.
“Moore's Law: Made Real by Intel Innovations." Intel. Web. 28 Mar. 2011. <http://www.intel.com/technology/mooreslaw/>.
"Ray Kurzweil Explains the Coming Singularity." YouTube. 28 Apr. 2009. Web. 28 Mar. 2011. <http://www.youtube.com/watch?v=1uIzS1uCOcE>.
Shelley, Mary W., and Johanna M. Smith. Frankenstein. Boston: Bedford-St. Martin's, 2000. Print.
"What Is the Singularity?" Singularity Institute for Artificial Intelligence. Web. 28 Mar. 2011. <http://singinst.org/overview/whatisthesingularity>.

Tuesday, March 22, 2011

You Only Know that You Are, if You Do Not Know that You Do Not Know, if You Are or Are Not

The concept of reality and its abnormalities isn’t a particularly new phenomenon.  Almost every form of popular media, ranging from Plato’s Allegory of the Cave, to Darl’s statement (aka the title of this post) in Faulkner’s As I Lay Dying, to modern movies such as Inception, has explored the conundrum that our current perception of reality may not actually be reality itself.  One of the places where this is expertly presented is the Wachowski Brothers’ film The Matrix.  In one of the defining scenes of the movie, Neo is presented with the choice to take either a blue pill or a red pill, one leaving him in the world that he knows and the other revealing the truth about exactly where he resides.
The question this scene inherently imparts on the viewer is this: if you were given the choice, would you take the path toward the truth, or would you be content with the world you currently live in?  Some reason that ignorance truly is bliss and would happily remain in their current known state.  Applying the risk-aversion principle taught in almost every economics class, they reason that the known is better than the unknown, and that the risk of a potentially horrible reality outweighs the chance that a utopia exists beyond its walls.  There are still others who value knowledge as the most precious commodity of all, and would happily accept the world outside The Matrix.  For them, the artificial simulation, no matter how accurate, can never satisfy, and only through the truth, no matter how dark, can they gain any freedom.
The question then parallels this one:  if you could obtain and understand all knowledge, all information, all events in our universe, would you?  Would you be willing to accept the bad, evil knowledge that would come along with the good?  Would you happily give up the joy of the pursuit of enlightenment to have everything you ever wanted to know, and so much more, spread out on the table before you?  If I were facing the choice Neo had to make, I feel I would follow his decision and abandon the womb of The Matrix.  I don’t know if I could stand not knowing what lay beyond those digital walls, no matter how bad it was.  Plus, even if the future were as bleak and grim as portrayed here, at least it would come with a cool black outfit and bullet time.

Wednesday, March 16, 2011

To Speak is not To Think


For years now, programmers constructing the utopian machine known as artificial intelligence have been writing their code, checking their hardware, and developing all this technology for a chance to pass the Turing Test.  If a machine can get through a five-minute conversation with 30% of its judges thinking it is human, then by the Turing Test, this machine is deemed “intelligent”.  Myriad pattern-matching algorithms and notes on human linguistics have been programmed into these machines in an attempt to imitate the design of human speech.
But what if a machine were able to pass the Turing Test?  No, better yet, what if a machine were able to completely dominate the test?  If it fooled 100% of those interacting with it, it must be intelligent.  Right?  Well, let’s look at the test itself.  All these computers have to do is carry on a five-minute conversation.  Now, I know that doing this convincingly is a daunting task, but the way that these computers structure their conversation is the important part.
There is no learning by the machine.  This learning, at least in my opinion, is what makes us as humans intelligent.  All of this pattern matching is predetermined code programmed not by a machine, but by a human.  Granted, conversation is a task that requires an incomprehensible amount of this pattern matching, but it is still pattern matching nonetheless.  Even Cleverbot, which is supposed to “learn” from the responses generated by its users, simply inserts these responses into an impressive, but still static, algorithm.
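To sketch what I mean (a hypothetical toy in Python, not how Cleverbot is actually built): a “learning” bot of this sort simply files away what users type and replays those lines later.  The stored data grows, but the algorithm itself never changes.

import random
from collections import defaultdict

class RecordingBot:
    """A hypothetical toy: its 'learning' is just storing what users typed
    and replaying those lines later. The algorithm itself never changes."""

    def __init__(self):
        self.memory = defaultdict(list)  # a line of conversation -> replies people have given to it
        self.last_bot_line = None

    def reply(self, user_input: str) -> str:
        # "Learning": record the user's line as one observed reply to whatever the bot said last.
        if self.last_bot_line is not None:
            self.memory[self.last_bot_line].append(user_input)
        # "Thinking": if anyone has ever replied to this exact line before, replay one of
        # those stored replies; otherwise fall back to a canned prompt.
        stored = self.memory.get(user_input)
        response = random.choice(stored) if stored else "What do you mean by that?"
        self.last_bot_line = response
        return response

bot = RecordingBot()
print(bot.reply("Hello"))                # nothing stored yet, so the canned fallback comes back
print(bot.reply("I asked how you are"))  # this line is now stored as a reply to the fallback prompt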
No, the day that artificial intelligence will be truly realized, at least to me, is not the day machines can simply carry on a conversation.  What is language but a series of patterns anyway?  Intelligence is defined by learning; it is how we as people come to understand concepts, ideas, creeds, beliefs, and the world around us.  A machine, once it can do this, may be intelligent, but until that day, it is still just a machine.

Exploration into AI - Annotated Bibliography for Project 3

Being a computer science major, I often get the enjoyment of playing with the code that makes these machines tick. Wanting to both bolster my resume and continue to enhance my knowledge of this field, I made it a goal to participate in research as early as possible. I managed to land a position working on a project with Dr. Arthur Charlesworth on artificial intelligence research, producing predicate logic games that were not only fundamentally correct but also interesting and appealing to the human mind. Through my education while preparing for the research, and also through a paper exploring P vs. NP that I wrote for my other FYS last semester, I learned just how hard, and perhaps truly impossible, it is to create a machine with true "intelligence".


This is why I was so struck by the literature that this course provided. Repeatedly, especially in novels like Gibson’s Neuromancer, AI is not only prevalent but advanced to an almost truly sentient state. The ease with which this seemed to be achieved conflicts with most of what I have personally learned about the subject, and it raises a few questions. Just how possible is it to create an artificial intelligence? Just how close are we? Are the machines and programs now dubbed AI truly "intelligent"? Also, assuming that this technology is possible, what are the social, political, military, and cultural ramifications of AI? How accepting will society be of this technology? And, if an AI is deleted, are we actually killing something? These are just some of the areas of the topic that I may explore.


Going forward in this project, there may be some difficulties. Mainly, the reports and technical reasoning behind the possibility of AI will undoubtedly be filled with complex concepts and ideas that at times may be hard even for me to understand. I will face the challenge of presenting this material to the target audience, a reader who probably doesn’t have this level of technical expertise, in an accessible and interesting way. Also, because this topic focuses more on future events than on past ones, I will have to rely heavily on academic conjecture rather than history. Because of the controversy surrounding the topic, I may have to wade through many conflicting opinions before I can form my own.


Annotated Bibliography


Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking, 1999. Print.


This book by Kurzweil examines his predictions about the future of artificial intelligence and how computers as a whole will be used to benefit humankind. However, Kurzweil also touches on the possible dangers that AI could bring. Rather than make conjectures in a fictional way as Gibson did, Kurzweil examines patterns of past events that could point to this future becoming a reality, including his prediction of a computer beating a human in chess, an idea that, at the time, nobody believed.


Ariza, Christopher. "The Interrogator as Critic: The Turing Test and the Evaluation of Generative Music Systems." Computer Music Journal 33.2 (2009): 48-70. Project MUSE. Web. 22 Jan. 2011. <http://muse.jhu.edu/>.

The article gives a history of the Turing Test and how it is being applied. It also gives information about how artificial intelligences have actually been creating music and competing with human musicians. This is interesting because music, much like art, has always been viewed as something that only a human could do. How true really is this statement?


Fernandes, Carlos M. "Pherographia: Drawing by Ants." Leonardo 43.2 (2010): 107-112. Project MUSE. Web. 22 Jan. 2011. <http://muse.jhu.edu/>.

Much like the article above, this one shows how artificial intelligences today have indeed managed to create art. It illustrates the ever-blurring line between the arts and the sciences, and suggests that our future may hold a merging of the two, just like the mind and the machine.

Cook, Stephen. "The P versus NP Problem." The Clay Mathematics Institute. Web. 16 Mar. 2011. <http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf>.

Provides a very thorough and official explanation of the P vs. NP problem. This will be my main source when referencing the problem, as it not only provides technical terms but also gives examples that will be friendly to the reader. It also touches on the implications and ramifications that this problem has for artificial intelligence as a whole.

Horvitz, Eric, and Bart Selman. "AAAI Presidential Panel on Long-Term AI Futures." Web. 16 Mar. 2011. <http://research.microsoft.com/en-s/um/people/horvitz/note_from_AAAI_panel_chairs.pdf>.

This is a report by leading AI researchers on a presidential panel about the possible dangers that AI could pose in the future. Instead of the utopian future that scientists are striving for today, it illustrates the harm that could come about. This will help in painting a truer picture of the future of AI.


"AITopics / BriefHistory." Association for the Advancement of Artificial Intelligence. Web. 16 Mar. 2011.
<http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/BriefHistory>.

Provides a timeline of the advancements in AI through history, all the way back to the 5th century B.C. This shows not only the advancements made in the technology, but, because the entries are ordered chronologically, it also presents a clear picture of the rapid development of computing and the idea of exponential growth in technological power.

Tuesday, March 1, 2011

The Profile You Are Trying To View Is Private

Looking over the evolution of the internet, it is apparent that the amount of information residing within its electronic confines has increased exponentially since its establishment.  This information doesn’t just pertain to news stories, scientific developments, social views, or other general knowledge.  It also encompasses the rise of the personal, private material that now resides within this global network.  Pictures of friends and recent adventures now reside in a virtual album on Facebook for the world to see rather than in a physical one tucked away discreetly on a shelf.  Personal views about issues aren’t just shared between friends anymore, but rather with the masses, thanks to the availability of free personal blogs.  Even embarrassing moments one would think to keep secret forever are anonymously, but openly, shared on sites such as fmylife.com.
This idea that our private lives should be shared openly and freely with others on the internet, a view that is becoming increasingly popular among the up-and-coming generations with the boom of sites like Facebook and Twitter, is one that author Sven Birkerts openly deplores.  This “Waning of the Private Self” (Into the Electronic Millennium) is the belief that we as a people “have been edging away from the opaqueness of private life and towards the transparence of a life lived within a set of [electronic] systems…The figure-ground model, which figures a solitary self before a background that is the society of other selves, is romantic in the extreme” (Into the Electronic Millennium).  His belief that the private self should be kept…well…private would only be accentuated by looking at Facebook’s current user base.
This image of the private self isn’t just about sharing yourself; it’s about the self being influenced by the communication media that impel our thoughts, actions, and perceptions of the world around us.  How we view events that we cannot witness firsthand is directly influenced by the spin that the reporting agency puts on them.  Our views are directly influenced by the popular views that society and the media portray.  This is only amplified by the fact that the media are inescapable.  We are surrounded by our communication infrastructure, whether it be TV, the internet, or radio.  Birkerts says that because of this, “we are much more interested in becoming collectively linked selves than privately suffering selves” (Is Cyberspace Destroying Society).  In a way, I feel he’s right to an extent.  While people putting themselves out there on the internet is their own choice, the ideas deemed important by the media are indeed thrust upon us involuntarily.  For how can we truly come up with a unique opinion when we are constantly bombarded and shaped by those around us, and, better yet, how can an opinion that differs from the societal norm survive under such pressure to conform?