History of Artificial Intelligence
Alicia Washington, Mallory McKee, Cody Nelson

Introduction
Evidence of artificial intelligence in folklore can be traced back to ancient Egypt, but it was the development of the electronic computer in 1941 that finally made the technology available to create machine intelligence. The term "artificial intelligence" was first coined in 1956 at the Dartmouth conference, and since then the field has expanded because of the theories and principles developed by its dedicated researchers. Although advancement in AI over its short modern history has been slower than first estimated, progress continues to be made. Since its birth five decades ago, a variety of AI programs have been developed, and they have shaped other technological advancements.

The Beginnings of A.I.
Although the computer provided the technology necessary for AI, it was not until the early 1950s that the link between human intelligence and machines was really observed. Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory. The most familiar example of feedback is the thermostat: it controls the temperature of an environment by measuring the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down. The importance of Wiener's research into feedback loops was his theory that all intelligent behavior is the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. This idea influenced much of the early development of AI.

Alan Turing
In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines with true intelligence. He noted that "intelligence" is difficult to define and devised his famous Turing Test: if a machine could carry on a conversation (over a teletype) that was indistinguishable from a conversation with a human being, then the machine could be called "intelligent." This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible, and the paper answered the most common objections to the proposition. The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

Gaming in A.I. History
In 1951, using the Ferranti Mark I machine at the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess. Arthur Samuel's checkers program, developed through the mid-1950s and early 1960s, eventually achieved sufficient skill to challenge a respectable amateur. Game AI would continue to be used as a measure of progress in AI throughout its history.

Allen Newell & Herbert Simon
In late 1955, Newell and Simon developed the Logic Theorist, considered by many to be the first AI program. The program represented each problem as a tree and attempted to solve it by selecting the branch most likely to lead to the correct conclusion. The impact the Logic Theorist made on both the public and the field of AI has made it a crucial stepping stone in the development of the field.
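As a rough illustration of the tree-search idea described above, the sketch below performs a simple best-first search over a small, made-up problem tree, always expanding the branch with the highest estimated score. This is only a minimal Python sketch of the general technique, not the actual Logic Theorist (which searched for proofs in symbolic logic); the tree, scores, and names are invented for illustration.

```python
import heapq

# A minimal best-first search over a hypothetical problem tree.
# Each node maps to (heuristic score, child nodes); a higher score means
# "more likely to lead to the correct conclusion." This illustrates the
# general idea of expanding the most promising branch first; it is not
# the Logic Theorist's actual proof-search procedure.
TREE = {
    "start":    (0.1, ["lemma_a", "lemma_b"]),
    "lemma_a":  (0.6, ["dead_end"]),
    "lemma_b":  (0.8, ["goal"]),
    "dead_end": (0.0, []),
    "goal":     (1.0, []),
}

def best_first_search(root, is_goal):
    # Max-heap via negated scores: always pop the highest-scoring node.
    frontier = [(-TREE[root][0], root, [root])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path                      # found a line of reasoning
        if node in visited:
            continue
        visited.add(node)
        for child in TREE[node][1]:
            heapq.heappush(frontier, (-TREE[child][0], child, path + [child]))
    return None                              # no branch reached the goal

print(best_first_search("start", lambda n: n == "goal"))
# -> ['start', 'lemma_b', 'goal']
```

The key point the sketch shows is that the program does not try every branch blindly; the heuristic score decides which branch is explored next.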
John McCarthy
In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. He invited them to New Hampshire for "The Dartmouth Summer Research Project on Artificial Intelligence." From that point on, because of McCarthy, the field would be known as artificial intelligence. Although not a huge success, the Dartmouth conference did bring together the founders of AI and served to lay the groundwork for the future of AI research.

Knowledge Expansion
In the seven years after the conference, AI began to pick up momentum. Although the field was still undefined, ideas formed at the conference were re-examined and built upon. Centers for AI research began forming at Carnegie Mellon and MIT, and new challenges were faced: first, creating systems that could efficiently solve problems by limiting the search, as the Logic Theorist had done; and second, making systems that could learn by themselves. In 1957 the first version of a new program, the General Problem Solver (GPS), was tested. The program was developed by the same pair that had developed the Logic Theorist. GPS was an extension of Wiener's feedback principle and was capable of solving a greater range of common-sense problems.

Knowledge Expansion (Cont.)
A couple of years after the GPS, IBM contracted a team to research artificial intelligence. While more programs were being produced, McCarthy was busy developing a major breakthrough in AI history. In 1958 McCarthy announced his new development: the LISP language, which is still used today and was for decades the language of choice among AI developers.

From Lab to Life
No longer was computer technology confined to a select few researchers in laboratories. The personal computer made its debut, along with many technology magazines. Organizations such as the American Association for Artificial Intelligence were also founded. With the demand for AI development, there was also a push for researchers to join private companies. Other fields of AI made their way into the marketplace during the 1980s, one in particular being machine vision. The work of Minsky and Marr was now the foundation for the cameras and computers on assembly lines performing quality control. Although crude, these systems could distinguish differences in the shapes of objects using black-and-white contrast. By 1985 over a hundred companies offered machine vision systems in the US, and sales totaled $80 million.

From Lab to Life (Cont.)
The 1980s were not entirely good for the AI industry. In 1986-87 demand for AI systems decreased, and the industry lost almost half a billion dollars. Companies such as Teknowledge and Intellicorp together lost more than $6 million, about a third of their total earnings. Another disappointment was the so-called "smart truck" financed by the Defense Advanced Research Projects Agency. The project's goal was to develop a robot that could perform many battlefield tasks. In 1989, due to project setbacks and unlikely success, the Pentagon cut funding for the project. Despite these discouraging events, AI slowly recovered. New technology was being developed in Japan. Fuzzy logic, first pioneered in the US, has the unique ability to make decisions under uncertain conditions (a short sketch of the idea appears after the next slide). Neural networks were also being reconsidered as a possible way of achieving artificial intelligence. The 1980s introduced AI to its place in the corporate marketplace and showed that the technology had real-life uses, ensuring it would be key in the 21st century.

A.I. Put to the Test
The military put AI-based hardware to the test of war during Desert Storm. AI-based technologies were used in missile systems, heads-up displays, and other advancements.
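As a rough illustration of the fuzzy-logic idea mentioned above, the sketch below maps an uncertain sensor reading onto overlapping fuzzy categories and blends the matching rules into a single decision. The membership functions, rule outputs, and the shake-compensation framing are all invented for illustration; this is a generic sketch of the technique, not the design of any commercial system.

```python
# A toy fuzzy-logic decision, illustrating "deciding under uncertain
# conditions": instead of a hard threshold, an input belongs partially
# to several categories, and the output blends the matching rules.
# The membership functions and rule outputs here are invented.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(shake):
    """Map a normalized shake reading (0..1) to a correction strength."""
    # Degrees of membership in three overlapping fuzzy sets.
    low    = triangular(shake, -0.5, 0.0, 0.5)
    medium = triangular(shake,  0.0, 0.5, 1.0)
    high   = triangular(shake,  0.5, 1.0, 1.5)
    # Each rule proposes a crisp correction; blend them by membership
    # (a simple weighted-average defuzzification).
    rules = {0.0: low, 0.3: medium, 1.0: high}
    total = sum(rules.values())
    return sum(corr * w for corr, w in rules.items()) / total if total else 0.0

for reading in (0.1, 0.5, 0.8):
    print(reading, round(fuzzy_correction(reading), 2))
```

Because an input can belong partly to several categories at once, the output changes smoothly rather than jumping at a hard threshold, which is what lets such systems make reasonable decisions from noisy, uncertain readings.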
AI has also made the transition to the home. With the popularity of the AI computer growing, the interest of the public has also grown. Applications for the Apple Macintosh and IBM-compatible computers, such as voice and character recognition, have become available. AI technology has also made steadying camcorders simple using fuzzy logic. With greater demand for AI-related technology, new advancements are becoming available. Inevitably, artificial intelligence has affected, and will continue to affect, our lives.

A.I. Timeline

Cartoon A.I.

AI and Hollywood Movies
• Many of today's Hollywood movies are about androids, humanoids, and robots.
• Machines going out of control
• Replacing humanity
• World domination

Smart Car
• Speech recognition
• Up-to-date information about historical landmarks and points of interest on the car's route
• Lowest-price gas stations close to the current position of the car
• Warns drivers of road hazards

Robotics
• Robotics continues to evolve, from manufacturing, medicine, and remote exploration to entertainment, security, and personal assistance.
• Robots may be one of the most well-known examples of artificial intelligence.
• Japan has announced that it will send the first humanoid robots to the moon.
http://smart-machines.blogspot.com/

Robotics example

A.I. In Military
The U.S. is spending as much as 100 billion dollars to develop robots that can aid or replace human soldiers on the front line. These robots can operate in combat zones with little supervision. Flight simulations and virtual environments help train over 500,000 soldiers.
http://www.aaai.org/aitopics/pmwiki/pmwiki.php/AITopics/Military

What's New in A.I.
Honda has created a helmet-like device that can read human brain waves and transmit them to a humanoid robot. A person can make the robot perform simple tasks, including moving its arm. Prototypes of a car with sensors and small motors that can navigate a traffic-laden city street with no driver have been created.
http://newsfeedresearcher.com/data/articles_t14/honda-robot-brain.html#hdng0

Today's A.I.
While military uses have tended to dominate commercial development of autonomous robots in America, business opportunities for smart robots are also sizable, according to experts. Japan's research into intelligent robotics has been oriented toward helping the nation's rapidly aging population perform domestic tasks.

A.I. In Video Games
• Video game artificial intelligence is a programming area that tries to make the computer act in a way similar to human intelligence.
• A rule-based system is used whereby information and rules are entered into a database, and when the video game AI is faced with a situation, it finds the appropriate information and acts accordingly (a short sketch of this idea follows the next slide).
http://www.adigitaldreamer.com/articles/artificial-intelligence-video-games.htm

A.I. in Video Games Continued
In 2001 the game Halo featured A.I. that could use vehicles and team tactics, and that could recognize threats such as grenades and incoming vehicles. In 2008 the game Left 4 Dead featured a new type of game AI called The Director. Instead of having a difficulty level that simply ramps up to a constant level, the A.I. analyzes how the players have fared in the game so far and tries to add subsequent events that give them a sense of narrative.
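As a hedged sketch of the rule-based approach described in the video game slides above, the following Python fragment matches the current game situation against a small table of hand-written rules and returns the action of the first rule whose conditions hold. The situations, rules, and actions are invented for illustration and do not represent the AI of any particular game.

```python
# A minimal rule-based game AI: facts about the current situation are
# matched against a small rule table, and the first rule whose
# conditions all hold decides the action. Rules and situations here
# are invented for illustration only.

RULES = [
    # (conditions that must all be true, action to take)
    ({"grenade_nearby": True},                    "take_cover"),
    ({"enemy_visible": True, "health_low": True}, "retreat"),
    ({"enemy_visible": True},                     "attack"),
    ({},                                          "patrol"),   # default
]

def choose_action(situation):
    """Return the action of the first rule whose conditions all match."""
    for conditions, action in RULES:
        if all(situation.get(key) == value for key, value in conditions.items()):
            return action

print(choose_action({"enemy_visible": True, "health_low": False}))   # attack
print(choose_action({"grenade_nearby": True, "enemy_visible": True}))  # take_cover
print(choose_action({}))                                             # patrol
```

Ordering the rules from most specific to most general gives a simple priority scheme: an urgent rule, such as reacting to a grenade, overrides the default behaviors below it.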