
Joy Fulcher
Information Retrieval: Artificial Intelligence
With the advancements in technology that occur daily, science fiction scenarios no longer
seem to be impossible. Ray guns, light sabers, and teleportation devices could eventually be part
of daily life, instead of just being an idea. Artificial intelligence, or AI, is a technology that has
been greatly advanced over the years, and is now no longer just an idea in books or films.
Artificial intelligence is defined by John McCarthy, a member of the computer science
department at Stanford and the father of AI, as “the science and engineering of making
intelligent machines, especially intelligent computer programs” (McCarthy 2). Another way of
explaining it is that AI is a system or machine that exists to perform tasks that would normally
require human intelligence. McCarthy explains that intelligence is the “computational part of the
ability to achieve goals in the world”; this definition helps delineate the scope of AI, since its
modus operandi is goal achievement.
Although stories of artificial intelligence date back to ancient Egypt, it was the
development of the electronic computer in 1941 that finally brought about the technology that
could make AI possible. The link between human intelligence and machines was observed in the
early 1950s by Norbert Wiener, who proposed that intelligent behavior was the result of feedback
mechanisms. An example of a feedback mechanism is a thermostat: the temperature of an
environment is controlled by gathering the actual temperature, comparing that to the desired
temperature, and responding by adjusting the system accordingly. In 1955, Newell and Simon
developed The Logic Theorist, which is often considered to be the first AI program. John
McCarthy organized a conference to attract those interested in “machine intelligence”; it was at
the conference that the phrase “artificial intelligence” was coined. In 1963, MIT received a $2.2
million grant from the Department of Defense to accelerate the development of AI and to ensure
that the US would stay ahead of the Soviet Union in technology research. In 1970,
the expert system was created. Expert systems predict the probability of a solution under set
conditions. This creation allowed AI systems to store rules and additional information. During
Desert Storm, the military used artificial intelligence systems for missiles and various displays.
AI has now become popular in a home setting, with advancements including voice and character
recognition.
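The feedback mechanism Wiener observed can be sketched as a simple sense-compare-respond loop. The thermostat below is a minimal illustration of the cycle described above; the function name, tolerance value, and return labels are invented for this example.

```python
# A minimal sketch of a feedback mechanism, using the thermostat example
# from the text: sense the actual temperature, compare it to the desired
# temperature, and respond by adjusting the system. All names and the
# tolerance value here are illustrative assumptions.

def thermostat_step(actual_temp, desired_temp, tolerance=0.5):
    """Compare the measured temperature to the set point and decide
    how the system should respond."""
    error = desired_temp - actual_temp
    if error > tolerance:
        return "heat"      # too cold: turn heating on
    elif error < -tolerance:
        return "cool"      # too warm: turn cooling on
    return "hold"          # within tolerance: do nothing

# One pass through the sense-compare-respond loop.
print(thermostat_step(18.0, 21.0))  # prints "heat"
```

In a real controller this function would run continuously, each response changing the environment that the next measurement senses, which is what makes the loop a feedback mechanism.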
In an article posted on his Stanford website, John McCarthy answers a series of questions about
artificial intelligence, beginning with its definition. McCarthy claims that although the idea of
artificial intelligence is to create intelligent machines, AI is typically not about simulating or
recreating human intelligence. The article also relates that although the computer programs are
intelligent, they do not have IQs, as IQ is based on human development, and a computer built
merely to score well on an IQ test would not be useful. Despite having a great deal of memory
and speed, an artificially intelligent computer is only as intelligent as the intellectual
mechanisms its designers understand. The computer may have abilities that humans do not
develop until their teen years, yet lack certain abilities possessed by toddlers. McCarthy states that this issue is “further
complicated by the fact that the cognitive sciences still have not succeeded in determining
exactly what the human abilities are” (McCarthy 2). In short, a designer’s understanding of
intellectual mechanisms is reflected in the resulting artificially intelligent product. McCarthy
goes on to explain the Turing test. Turing proposed this test as a sufficient condition for
machine intelligence: a human judge converses over teletype with both a computer and a person,
and the computer attempts to convince the judge that it is the human. The
idea is that if a computer can successfully imitate a human, it can be considered intelligent.
McCarthy believes that although a machine able to pass the test would certainly be
considered intelligent, a machine could be considered intelligent without imitating human
behavior. According to McCarthy, human-level intelligence in machines will remain out of
reach until fundamentally new ideas emerge. An alternative to creating machines with human-level
intelligence is the “child machine” that improves itself with its experiences, but McCarthy claims
that artificial intelligence is not currently able to learn the way that children learn through
physical experiences. Current machines also do not understand language well enough to learn by
reading. McCarthy also addresses questions of morality in relation to artificial intelligence,
noting that some believe it to be impossible, some believe it to be immoral, and some are
disappointed that it has not yet been fully achieved.
In “Intelligent Agents: It’s Nice to Get Stuff Done for You,” Ernest Perez describes
“intelligent agents” that utilize artificially intelligent technologies. The goal of these agents is to
“harness the power and potential of the desktop computer and then to leverage or magnify it even
more, by combining it with the enormous Internet distributed information file” (Perez 52).
Basically, these agents should be “practical and valuable” (52). Perez claims that these agents
are described using these phrases: “smart agents, search agents, Web robots, bots, avatars,
intelligent assistants, smart applications, customer service robots, shopbots, … and intelligent
tools” (51). Among the many intelligent agent websites are BotSpot.com, bottechnology.com,
and agentland.com. The purpose of the article is to
clear up any confusion related to these intelligent agents by explaining that they are real and no
longer just science fiction myths. Perez believes that the agents are “the 21st century magic
solution to information retrieval problems” (51). Several definitions of intelligent agents are
given; first, an intelligent agent “uses autonomous, flexible, goal-defined behavior based on a
designed action and rule set” (53). Second, an intelligent agent is “social, in that it can interact
and communicate with humans, and perhaps with other agents” (53). Finally, an intelligent
agent “takes a proactive rather than reactive action to achieve a goal” (53). According to the
article, these intelligent agents serve to make the Web a more interactive place, which will, in
turn, serve to make it a more inviting environment for users. Agents can also perform
simultaneous computing tasks to synthesize information from different networks. The Semantic
Web, a friendlier Web, is one such technology that communicates with users and makes the
Web seem more accessible. Semantic Web agents “roam from page to
page – mining data, text, and ideas and performing sophisticated tasks at the bidding of their
human masters” (51-52). These intelligent agents travel across networks and cause interactions
between the users and networks, which “harness[es] the latent power of distributed network
intelligence;” Perez explains that, although this concept may sound fictional, it is currently
testable. The next step for intelligent agents, according to Perez, is to control external devices.
These external devices can be microprocessors, operating controls, motors, or thermostats.
Currently, simple external control operations are in use to manage inventories, operate ‘smart
homes’, and remotely operate television cameras through the Web. The agents are able to
perform operations that would essentially be “B-O-R-I-N-G” or repetitive, complex, difficult,
and time consuming for humans (52). Intelligent agents’ value is that they deliver an immediate
payoff by removing an immense workload from humans, Perez claims. Two broad types of
intelligent agents are search agents and specialized agents. Search agents are “the most familiar
to information professionals,” according to Perez. Included in this category are the Web spider
engines such as Google and AltaVista. Web search agents range from basic Web spiders to Web
metacrawlers and desktop metacrawlers. The specialized agents are broken down into
categories: monitors, specialized search bots, knowledge management, and ChatterBots. These
all perform various tasks within information retrieval systems.
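The three agent properties Perez lists (goal-defined behavior driven by a rule set, social communication with humans, and proactive rather than reactive action) can be sketched in a few lines. The price-monitoring task, class name, and threshold below are hypothetical illustrations, not anything described in the article.

```python
# A minimal sketch of Perez's three agent properties. The "price monitor"
# task and every name here are invented for illustration: the agent has a
# goal and a rule (goal-defined behavior), reports back to its human
# (social ability), and acts on its own when the rule fires (proactivity).

class MonitorAgent:
    def __init__(self, goal_price):
        self.goal_price = goal_price   # the agent's goal, set once by the user
        self.messages = []             # social ability: reports back to its human

    def observe(self, price):
        # Proactive: the agent acts whenever its rule fires, without
        # waiting to be asked.
        if price <= self.goal_price:
            self.messages.append(f"Goal met: price {price} <= {self.goal_price}")

agent = MonitorAgent(goal_price=100)
for price in [120, 105, 95]:   # a stream of observations the agent watches
    agent.observe(price)
print(agent.messages)  # one message, fired when the price dropped to 95
```

A real shopbot or monitor agent would watch a network feed instead of a list, but the shape of the behavior is the same: autonomous observation against a user-defined goal, with proactive reporting.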
“A Retrospective and Prospective View of Information Retrieval and Artificial
Intelligence in the 21st Century” by Eugene Garfield begins by outlining changes in information
retrieval since the 1950s. Garfield recalls that scientists previously
subscribed to several journals and visited libraries for any other journals they may have needed.
Reprints were often traded, and paper correspondence, printed indexes, and abstracting services
were the norm. In the 1970s, online access to indexing and abstracting services
became available, and twenty years later, full-text journal articles became available online.
Instant access to these journals is achieved through artificially intelligent services that link users
to their desired content. With the availability of archives, users are able to search not only recent
documents, but old ones as well. Garfield claims that, in five years, scientists will have complete
online access to the previous ten years of journal articles, and in ten years, most of the journals
from the twentieth century will be available online, especially the thousand most consulted ones.
Projects such as JSTOR provide a model for this; however, Garfield cites Dana Roth as
“rejecting the JSTOR model”; Roth suggests that files of the most cited articles be uploaded
instead (Garfield 18). The journals with the highest impact factor are available online, but
complete online archives are extremely rare, according to Garfield. In a reference to one of his
previous works, Garfield mentions an “information nirvana,” a “metaphor for the World
Brain of H.G. Wells and the dreams of early encyclopedists” (20). With each generation of
advancement in the fields of information technology, new refinements are needed. Garfield
suggests that artificially intelligent machines would be beneficial in the process of translating,
scanning, and uploading articles to the online databases, but he does claim that this is not
currently possible. Garfield also suggests that although the automation will likely not replace
human intelligence in the future, artificial intelligence technologies will be implemented in
indexing-poor fields such as evaluative medicine and bioinformatics. Finally, Garfield believes
that information retrieval is focused on gaining easier access to primary literature and that, in the
future, there will be artificially intelligent machines to make this easier. For now, however,
humans must continue to perform their own reviews.
In a study in Australia, Richard Leibbrandt et al. attempted to ascertain whether artificial
intelligence tools could assist with discovering, evaluating, and tagging digital resources.
According to the authors, “the quantity and nature of digital content means that manual
identification and cataloguing of appropriate resources is time-consuming and can result in
inconsistent metadata which is not readily shared between systems” (Leibbrandt 1). The main
research question is stated as: “which steps in the metadata creation workflow, that is which of
the decisions made by human metadata creators, could artificial intelligence systems be trained
to implement?” (3). The data collection cycle was meant to be optimized by “augmenting
collection processes with automated ones” or even replacing human processes where necessary
(3). The potential benefits, if accuracy is achieved, are “improved efficiency”, “improved user
experience”, and “improved integration” (3). These benefits depend upon the automated
system’s ability to imitate human metadata creators’ actions, which include providing
classification suggestions, delivering results through the use of thesauri, and mapping user tags.
This article provides yet another definition of artificial intelligence; it claims that “Artificial
Intelligence involves the use of computer programs to solve problems for which there is no
simple and straightforward process of arriving at a solution” (3). Also mentioned is the use of
text classification, which is commonly used with artificial intelligence, and “thematically or
systematically disambiguates natural language texts using one or multiple topical tags” (3). Text
classification is arranged hierarchically by elements such as positive, neutral, and negative. An
important task of text classification is to extract a small set of key words and phrases to indicate
the topic of the text, and classification of the text is then based upon the keywords rather than the
text itself. One of the challenges in creating an artificially intelligent cataloguing system is to
delineate the heuristic process through which experts determine metadata and category terms. In
the study, the authors were interested in using key phrases and other metadata from full-text
mapping in order to extract subject keywords from a controlled vocabulary. The tasks were
considered to be classification tasks, but two problems were encountered: a very large amount of
historical data was required, and the system could only recognize words and phrases that had
previously appeared in its resources. Although the study comprised three experiments (key
phrase extraction, subject term prediction, and keyword mapping), the results showed only
modest success. The authors concluded that “automated classification based on artificial intelligence is
useful as a means of supplementing and assisting human classification, but is not at this stage a
replacement for human classification of educational resources” (1).
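The keyword-based approach described above (extract a small set of key terms from a controlled vocabulary, then classify on those terms rather than the full text) can be sketched briefly. The vocabulary, categories, and tie-breaking rule below are invented for illustration and are far simpler than the study's actual system, but the sketch also shows the second limitation the study found: text containing only unseen terms cannot be classified.

```python
# A minimal sketch of keyword-based text classification: keep only words
# found in a controlled vocabulary, then tag the text with the category
# those keywords point to most often. The vocabulary and categories are
# invented placeholders.

CONTROLLED_VOCAB = {
    "photosynthesis": "biology",
    "cell": "biology",
    "fraction": "mathematics",
    "algebra": "mathematics",
}

def extract_keywords(text):
    """Keep only the words that appear in the controlled vocabulary."""
    return [w for w in text.lower().split() if w in CONTROLLED_VOCAB]

def classify(text):
    """Classify based on the extracted keywords rather than the full text."""
    keywords = extract_keywords(text)
    if not keywords:
        return None  # limitation: terms never seen before cannot be handled
    tags = [CONTROLLED_VOCAB[k] for k in keywords]
    return max(set(tags), key=tags.count)  # most frequent category wins

print(classify("an introduction to algebra and fraction arithmetic"))  # prints "mathematics"
```

Real systems learn the vocabulary-to-category mapping from large amounts of historical training data, which is exactly the first problem the study reports.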
A study of artificially intelligent tutors and medical students was performed in order to
ascertain how well students learn when assisted by such tutors before exams. CIRCSIM-Tutor
is a tutor designed to teach medical students through natural-language dialogue, and its
behaviors are modeled after those exhibited by two human tutors. More specifically, the tutor
focuses on one specific area of study: the baroreceptor reflex. To begin the study, the students
made predictions about the system’s responses. The article provides transcripts of some
conversations held by the students and tutor, in which the tutor asks the students questions
relating to the baroreceptor reflex, and the students answer the questions. If a question is
incorrectly answered, CIRCSIM-Tutor prompts the students with a statement that will assist in
answering correctly. In order for CIRCSIM-Tutor to work, the system includes an Expert
Problem Solver and Domain Knowledge base, a Student Modeler, and a Pedagogical Module.
The interface of the tutor has been split into three sections because of its need to carry on a
dialogue in natural language: Input Understander, Text Generator, and Screen Manager.
Because the tutor must react to all of the students’ responses, it is able to understand “partially
correct answers, near-misses, ‘grain of truth’ answers, and answers revealing some kinds of
misconceptions” (Michael 247). The system is even equipped to respond to answers such as “I
don’t know”. Also, the system was able to correct spelling errors made by the students and still
explain the answers needed. The test results showed that the students who used the tutor were
able to better understand the baroreceptor reflex on which they were tested. Although
CIRCSIM-Tutor is not as sophisticated as a human tutor, it is able to predict the learning
outcomes of the students with which it has interacted.
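The core exchange described above (ask a question, and on a wrong or "I don't know" answer, prompt with a guiding statement before accepting another attempt) can be sketched as a simple loop. The question, answers, and hint below are invented placeholders, not CIRCSIM-Tutor's actual dialogue, and this sketch omits the Input Understander's handling of partially correct answers and spelling errors.

```python
# A minimal sketch of the tutoring exchange described in the text: the
# tutor asks a question; a wrong answer triggers a hint that assists the
# student in answering correctly. The dialogue content is a hypothetical
# placeholder.

def tutor_turn(question, correct_answer, hint, student_answers):
    """Run one tutoring exchange over a scripted list of student answers,
    returning the (speaker, utterance) transcript."""
    transcript = [("tutor", question)]
    for answer in student_answers:
        transcript.append(("student", answer))
        if answer.strip().lower() == correct_answer:
            transcript.append(("tutor", "Correct."))
            break
        # Wrong answer or "I don't know": respond with a guiding statement.
        transcript.append(("tutor", hint))
    return transcript

log = tutor_turn(
    question="What happens to heart rate when blood pressure falls?",
    correct_answer="it increases",
    hint="Remember: the baroreceptor reflex compensates for the change.",
    student_answers=["I don't know", "it increases"],
)
for speaker, line in log:
    print(f"{speaker}: {line}")
```

The real system replaces the scripted answer list with live natural-language input, which is why it needs the Input Understander, Text Generator, and Screen Manager components described above.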
In “The application of intelligent agents in libraries: a survey”, Guoying Liu provides a
literature review on the “utilization of intelligent agent technology in a library environment” (Liu
78). Liu’s statement, “With the dramatic increase of available materials and user expectations,
libraries are forced to exploit new technology to fulfill their missions with relatively limited
resources” sets up her claim that intelligent agents have potential to be of assistance in libraries
(78). Liu defines intelligent agents in relation to computer science as “software systems that are
able to take actions towards their goals without human intervention” (78), but she also claims
that no single definition of agents has been adopted. She claims that agents should be
autonomous and cites Wooldridge and Jennings as saying that other attributes of agents can
include “social ability, reactivity, pro-activeness, rationality, and mobility” (80). Also cited are
Russell and Norvig’s claims that intelligent agents are divided into four groups: reflex agents,
goal-based agents, utility-based agents, and learning agents. According to Liu, “information
professionals play a key role in the agent-based system development and implementation” of
information agents. A paper reviewed by Liu discusses potential use of agents for various library
tasks that include development of collections, classification, indexing, abstracting, and
circulation and reference services. A major focus of Liu’s literature review includes intelligent
agents in digital libraries; she outlines agent-based digital library projects, multi-agent
architecture for digital libraries, and intelligent agent information retrieval in digital libraries.
Also discussed is web-based information retrieval and mobile agents for remote information
retrieval and processing.
One step toward the future for artificial intelligence is that of ambient intelligence.
Ambient intelligence can be considered to be “a digital environment that proactively, but
sensibly, supports people in their daily lives” (Ramos 15). Ambient intelligence, once
implemented, can interpret the state of the user’s environment, represent the information and
knowledge associated with the environment, model, simulate, and represent entities in the
environment, plan actions, learn about the environment, interact with humans, and act upon its
environment. Artificial intelligence is imperative for ambient intelligence to be achieved,
creating a new challenge for those working in the field of artificial intelligence.
Artificial intelligence technologies clearly require a great deal of improvement before they can
be utilized to their full potential, but there are several possibilities for
using these technologies in an information environment once the goals are realized. Currently,
the technologies are inadequate, but their advancement will be extremely useful for information
professionals. For example, once artificial intelligence technologies that can perform
classification and cataloguing tasks have been designed, libraries will be able to allocate their
resources to other areas of need, as the cataloguing and classification process
is very time-consuming and frequently inefficient. Intelligent tutors that can use natural
language in dialogue can be implemented to assist library users with their educational needs. A
future of ambient intelligence could also potentially assist users with reference searches in
libraries, if the systems are able to interpret a specific user’s informational needs. The potential
for artificially intelligent robots also exists in libraries, as there are several libraries that currently
use a robot system for fast and easy information retrieval. Such systems are not yet
cost-effective for all libraries, but as the technologies advance, it is entirely possible that
artificial intelligence will become more widely accessible. Of course, there is always the question of artificial intelligence behaving
as it does in science fiction (i.e. I, Robot and Terminator), but with the current technologies, an
artificially intelligent takeover is not plausible. Scientists and programmers should certainly
continue their work, and hopefully it will lead to more useful discoveries in the field of
information retrieval.
Bibliography
Garfield, Eugene. "A retrospective and prospective view of information retrieval and artificial
intelligence in the 21st century." JASIST 52.1 (2001): 18-21.
Leibbrandt, Richard, et al. "Smart Collections: Can Artificial Intelligence Tools and Techniques
Assist with Discovering, Evaluating and Tagging Digital Learning
Resources?" International Association of School Librarianship (2010).
Liu, Guoying. "The application of intelligent agents in libraries: a survey." Program: electronic
library and information systems 45.1 (2011): 78-97.
McCarthy, John. "What is artificial intelligence?" URL:
http://www-formal.stanford.edu/jmc/whatisai.html (2007).
Michael, Joel, et al. "Learning from a computer tutor with natural language
capabilities." Interactive Learning Environments 11.3 (2003): 233-262.
Perez, Ernest. "Intelligent Agents: It's Nice To Get Stuff Done for You." Online 26.3 (2002): 51-56.
Ramos, Carlos, Juan Carlos Augusto, and Daniel Shapiro. "Ambient intelligence—the next step
for artificial intelligence." IEEE Intelligent Systems 23.2 (2008): 15-18.