AI Unit 5

UNIT – 5
What is an Expert System?
An expert system is a computer program that is designed to solve complex problems and to provide decision-making ability like a human expert. It does this by extracting knowledge from its knowledge base, using reasoning and inference rules according to the user's queries.
The expert system is a part of AI, and the first expert systems were developed in the 1970s as one of the first successful applications of artificial intelligence. An expert system solves the most complex issues as an expert would, by drawing on the knowledge stored in its knowledge base. The system helps in decision making for complex problems using both facts and heuristics, like a human expert. It is called an expert system because it contains the expert knowledge of a specific domain and can solve complex problems of that particular domain. These systems are designed for a specific domain, such as medicine, science, etc.
The performance of an expert system depends on the expert knowledge stored in its knowledge base: the more knowledge stored in the KB, the better the system's performance. A common example of an ES is the suggestion of spelling corrections while typing in the Google search box.
Below is the block diagram that represents the working of an expert system:
Note: It is important to remember that an expert system is not used to replace human experts; instead, it is used to assist humans in making complex decisions. These systems do not have human capabilities of thinking and work on the basis of the knowledge base of a particular domain.
Characteristics of Expert System
o High Performance: The expert system provides high performance for solving any type of complex problem of a specific domain with high efficiency and accuracy.
o Understandable: It responds in a way that is easily understandable by the user. It can take input in human language and provides the output in the same way.
o Reliable: It is highly reliable in generating efficient and accurate output.
o Highly responsive: An ES provides the result for any complex query within a very short period of time.
Components of Expert System
An expert system mainly consists of three components:
o User Interface
o Inference Engine
o Knowledge Base
1. User Interface
With the help of a user interface, the expert system interacts with the user, takes queries as input in a readable format, and passes them to the inference engine. After getting a response from the inference engine, it displays the output to the user. In other words, it is an interface that helps a non-expert user communicate with the expert system to find a solution.
2. Inference Engine (Rule Engine)
o The inference engine is known as the brain of the expert system, as it is the main processing unit of the system. It applies inference rules to the knowledge base to derive a conclusion or deduce new information. It helps in deriving an error-free solution to the queries asked by the user.
o With the help of the inference engine, the system extracts knowledge from the knowledge base.
o There are two types of inference engine:
o Deterministic inference engine: The conclusions drawn from this type of inference engine are assumed to be true. It is based on facts and rules.
o Probabilistic inference engine: This type of inference engine allows uncertainty in its conclusions, which are based on probability.
The inference engine uses the following modes to derive solutions (a minimal forward-chaining sketch follows this list):
o Forward Chaining: It starts from the known facts and rules, and applies the inference rules to add their conclusions to the known facts.
o Backward Chaining: It is a backward reasoning method that starts from the goal and works backward to prove the known facts.
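As a rough illustration of forward chaining, here is a minimal sketch in Python; the facts and rules are hypothetical placeholders, not a production inference engine:

    # Minimal forward-chaining sketch. Facts and rules are hypothetical.
    facts = {"has_fever", "has_cough"}

    # Each rule pairs a set of conditions with a conclusion.
    rules = [
        ({"has_fever", "has_cough"}, "possible_flu"),
        ({"possible_flu"}, "recommend_rest"),
    ]

    # Repeatedly fire rules whose conditions are all known facts,
    # adding their conclusions, until nothing new can be derived.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # includes 'possible_flu' and 'recommend_rest'

Backward chaining would instead start from a goal such as "recommend_rest" and search for rules whose conclusions match it, recursively trying to prove their conditions.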
3. Knowledge Base
o The knowledge base is a type of storage that stores knowledge acquired from different experts of a particular domain. It is considered a large store of knowledge. The richer the knowledge base, the more precise the expert system will be.
o It is similar to a database that contains information and rules of a particular domain or subject.
o One can also view the knowledge base as a collection of objects and their attributes. For example, a Lion is an object, and its attributes are that it is a mammal, it is not a domestic animal, etc. (a toy representation of this idea follows this list).
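As a rough illustration (the objects and attribute names below are invented for the example), such object-attribute knowledge can be sketched as simple frames in Python:

    # Toy frame-style knowledge base: objects mapped to their attributes.
    # Object and attribute names are illustrative assumptions only.
    knowledge_base = {
        "Lion": {"is_mammal": True, "is_domestic": False},
        "Dog":  {"is_mammal": True, "is_domestic": True},
    }

    print(knowledge_base["Lion"]["is_domestic"])  # False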
Components of Knowledge Base
o Factual Knowledge: The knowledge which is based on facts and accepted by knowledge engineers comes under factual knowledge.
o Heuristic Knowledge: This knowledge is based on practice, the ability to guess, evaluation, and experience.
Knowledge Representation: It is used to formalize the knowledge stored in the knowledge base using if-then rules.
Knowledge Acquisition: It is the process of extracting, organizing, and structuring the domain knowledge, specifying the rules for acquiring knowledge from various experts, and storing that knowledge in the knowledge base.
Capabilities of the Expert System
Below are some capabilities of an Expert System:
o Advising: It is capable of advising a human being on queries in the domain of the particular ES.
o Providing decision-making capabilities: It provides decision-making capability in any domain, such as making financial decisions, decisions in medical science, etc.
o Demonstrating a device: It is capable of demonstrating a new product, including its features, specifications, how to use it, etc.
o Problem-solving: It has problem-solving capabilities.
o Explaining a problem: It is also capable of providing a detailed description of an input problem.
o Interpreting the input: It is capable of interpreting the input given by the user.
o Predicting results: It can be used to predict a result.
o Diagnosis: An ES designed for the medical field is capable of diagnosing a disease without using multiple components, as it already contains various inbuilt medical tools.
Advantages of Expert System
o These systems are highly reproducible.
o They can be used in risky places where human presence is not safe.
o Error possibilities are low if the KB contains correct knowledge.
o The performance of these systems remains steady, as it is not affected by emotions, tension, or fatigue.
o They respond to queries at very high speed.
Limitations of Expert System
o The response of the expert system may be wrong if the knowledge base contains wrong information.
o Unlike a human being, it cannot produce creative output for different scenarios.
o Its maintenance and development costs are very high.
o Knowledge acquisition for designing an ES is very difficult.
o For each domain, we require a specific ES, which is one of its big limitations.
o It cannot learn by itself and hence requires manual updates.
Applications of Expert System
o In the designing and manufacturing domain: It can be broadly used for designing and manufacturing physical devices such as camera lenses and automobiles.
o In the knowledge domain: These systems are primarily used for publishing relevant knowledge to users. Two popular ES used in this domain are advisors, such as tax advisors.
o In the finance domain: In the finance industry, it is used to detect possible fraud and suspicious activity, and to advise bankers on whether or not they should provide loans to a business.
o In the diagnosis and troubleshooting of devices: The ES system is used in medical diagnosis, which was the first area where these systems were applied.
o Planning and scheduling: Expert systems can also be used for planning and scheduling particular tasks to achieve a goal.
RULE BASED ARCHITECTURE OF AN EXPERT SYSTEM
The most common form of architecture used in expert and other types of knowledge-based systems is the production system, also called a rule-based system. This type of system uses knowledge encoded in the form of production rules, i.e. if-then rules. A rule has a conditional part on the left-hand side and a conclusion or action part on the right-hand side. For example:
If: condition1 and condition2 and condition3
Then: take action4
Each rule represents a small chunk of knowledge in the given domain of expertise. When the known facts support the conditions on the rule's left side, the conclusion or action part of the rule is accepted as known. The rule-based architecture of an expert system consists of the domain expert, knowledge engineer, inference engine, working memory, knowledge base, external interfaces (user interface, explanation module, databases, spreadsheets, executable programs), as mentioned in the figure.
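The if-then form above can be sketched in Python as follows; the conditions, action and working-memory values are hypothetical placeholders:

    # Sketch of one production rule:
    # IF condition1 AND condition2 AND condition3 THEN take action4.
    def condition1(wm): return wm.get("temperature", 0) > 100
    def condition2(wm): return wm.get("pressure", 0) > 50
    def condition3(wm): return wm.get("valve_open") is False

    def action4(wm): wm["alarm"] = True  # the rule's action part

    rule = ([condition1, condition2, condition3], action4)

    working_memory = {"temperature": 120, "pressure": 60, "valve_open": False}
    conditions, action = rule
    if all(cond(working_memory) for cond in conditions):
        action(working_memory)  # left-hand side satisfied: fire the rule

    print(working_memory["alarm"])  # True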
Figure: Integration of Expert System Components
The components of the rule based architecture are as follows.
1. User Interface: It is the mechanism by which the user and the expert system communicate with each other, i.e. the user interacts with the system through a user interface. It acts as a bridge between the user and the expert system. This module accepts the user's queries and submits them to the expert system. The user normally consults the expert system for the following reasons:
a) To get answers to his/her queries.
b) To get an explanation of the solution, for psychological satisfaction.
The user interface module is designed in such a way that, at the user level, it accepts the query in a language understandable by the expert system. To make the expert system user friendly, the user interface interacts with the user in natural language. The user interface provides as many facilities as possible, such as menus, graphical interfaces, etc., to make the dialog user friendly and more attractive.
2. Explanation Module: The explanation module explains the reasoning of the system to the user. It provides the user with an explanation of the reasoning process when requested. The credibility of an expert system is established only when it is able to explain "how and why" a particular conclusion was drawn. This explanation increases the user's trust in the expert system.
a) Explanation (How): To respond to a how query, the explanation module traces the chain of rules fired during a consultation with the user. This explanation mode can be activated once the process is over. It explains how a fact was deduced by the system, and similarly how a rule was or wasn't used. The simplest way to do this is to cite the rule that allows the deduction. For example, if the system (S) gives information about the parent-child relationship to the user (U), the following dialog is possible:
S: My diagnosis is "A is the father of B".
U: How?
S: The result was obtained from the following facts and rules.
Fact no. 11: A is the parent of B.
Fact no. 15: A is male.
Rule no. 110: X is the father of Y if X is the parent of Y and X is male.
So A is the father of B.
b) Explanation (Why): To respond to a why query, the explanation module must be able to explain why certain information is needed by the inference engine to complete a step in the reasoning process. This mode of explanation is dynamic and is used when the system functions in backward chaining. If the user doesn't understand the reason behind a question, the system is capable of explaining why the question was asked. For example:
S: Is the following true? A is the father of B.
U: Why?
S: I need the fact "A is the father of B" to establish the fact "B is the son of A", by using rule no. 4: B is the son of A if A is the father of B.
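One way an explanation module can answer a "how" query is to record each rule fired during the consultation and replay the derivation on request. A minimal sketch, with hypothetical rule and fact names:

    # Record fired rules so a "how" query can replay the derivation.
    trace = []

    def fire(rule_name, premises, conclusion, facts):
        facts.add(conclusion)
        trace.append((rule_name, premises, conclusion))

    facts = {"parent(A, B)", "male(A)"}
    fire("rule_110", ["parent(A, B)", "male(A)"], "father(A, B)", facts)

    def explain_how(goal):
        for name, premises, conclusion in trace:
            if conclusion == goal:
                return f"{goal} was deduced by {name} from {premises}"
        return f"{goal} was given as a fact"

    print(explain_how("father(A, B)"))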
3. Working Memory: It is a global database of facts used by the rules.
4. Knowledge Engineering: The primary people involved in building an expert system are the knowledge engineer, the domain expert and the end user. Once the knowledge engineer has obtained a general overview of the problem domain and gone through several problem-solving sessions with the domain expert, he/she is ready to begin actually designing the system: selecting a way to represent the knowledge, determining the search strategy (backward or forward) and designing the user interface. After completing the design, the knowledge engineer builds a prototype. The prototype should be able to solve problems in a small area of the domain. Once the prototype has been implemented, the knowledge engineer and domain expert test and refine its knowledge by giving it problems to solve and correcting its shortcomings.
5. Knowledge Base: In the rule-based architecture of an expert system, the knowledge base is the set of production rules. The expertise concerning the problem area is represented by productions. In rule-based architecture, condition-action pairs are represented as rules, with the premises of the rules (the if part) corresponding to the condition and the conclusion (the then part) corresponding to the action. Case-specific data are kept in the working memory. The core part of an expert system is the knowledge base, and for this reason an expert system is also called a knowledge-based system. Expert system knowledge is usually structured in the form of a tree that consists of a root frame and a number of sub-frames. A simple knowledge base can have only one frame, i.e. the root frame, whereas a large and complex knowledge base may be structured on the basis of multiple frames.
6. Inference Engine: The inference engine accepts user input queries and responses to questions through the I/O interface. It uses this dynamic information together with the static knowledge stored in the knowledge base. The knowledge in the knowledge base is used to derive conclusions about the current case as presented by the user's input. The inference engine is the module which finds an answer from the knowledge base. It applies the knowledge to find the solution to the problem. In general, the inference engine makes inferences by deciding which rules are satisfied by the facts, decides the priorities of the satisfied rules, and executes the rule with the highest priority. Generally, the inference process is carried out recursively in three stages: match, select and execute. During the match stage, the contents of working memory are compared to the facts and rules contained in the knowledge base. When proper and consistent matches are found, the corresponding rules are placed in a conflict set. A minimal sketch of this cycle follows.
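A minimal sketch of the match-select-execute cycle in Python; the rules, facts and priorities are hypothetical:

    # Match-select-execute cycle with priority-based conflict resolution.
    # The rules and facts are invented for illustration.
    rules = [
        (1, {"fever"}, "infection_suspected"),
        (5, {"fever", "rash"}, "measles_suspected"),
    ]
    facts = {"fever", "rash"}

    while True:
        # Match: rules whose conditions hold and whose conclusion is new.
        conflict_set = [r for r in rules
                        if r[1] <= facts and r[2] not in facts]
        if not conflict_set:
            break
        # Select: pick the highest-priority rule from the conflict set.
        priority, conditions, conclusion = max(conflict_set,
                                               key=lambda r: r[0])
        # Execute: add its conclusion to working memory and repeat.
        facts.add(conclusion)

    print(facts)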
What is NLP?
NLP stands for Natural Language Processing, a field at the intersection of computer science, human language, and artificial intelligence. It is the technology used by machines to understand, analyse, manipulate, and interpret human language. It helps developers organize knowledge for performing tasks such as translation, automatic summarization, Named Entity Recognition (NER), speech recognition, relationship extraction, and topic segmentation.
Advantages of NLP
o NLP helps users ask questions about any subject and get a direct response within seconds.
o NLP offers exact answers to questions; it does not offer unnecessary or unwanted information.
o NLP helps computers communicate with humans in their own languages.
o It is very time efficient.
o Many companies use NLP to improve the efficiency and accuracy of documentation processes and to identify information in large databases.
Disadvantages of NLP
A list of disadvantages of NLP is given below:
o NLP may not show context.
o NLP is unpredictable.
o NLP may require more keystrokes.
o NLP is unable to adapt to new domains and has limited functionality; this is why an NLP system is typically built for a single, specific task.
Components of NLP
There are the following two components of NLP:
1. Natural Language Understanding (NLU)
Natural Language Understanding (NLU) helps the machine to understand and analyse
human language by extracting the metadata from content such as concepts, entities,
keywords, emotion, relations, and semantic roles.
NLU is mainly used in business applications to understand the customer's problem in both spoken and written language.
NLU involves the following tasks:
o It is used to map the given input into a useful representation.
o It is used to analyze different aspects of the language.
2. Natural Language Generation (NLG)
Natural Language Generation (NLG) acts as a translator that converts the computerized
data into natural language representation. It mainly involves Text planning, Sentence
planning, and Text Realization.
Difference between NLU and NLG
NLU: the process of reading and interpreting language. It produces non-linguistic outputs from natural language inputs.
NLG: the process of writing or generating language. It produces natural language outputs from non-linguistic inputs.
Applications of NLP
There are the following applications of NLP:
1. Question Answering
Question Answering focuses on building systems that automatically answer questions asked by humans in a natural language.
2. Spam Detection
Spam detection is used to detect unwanted e-mails reaching a user's inbox (a small classifier sketch appears after this list of applications).
3. Sentiment Analysis
Sentiment Analysis is also known as opinion mining. It is used on the web to analyse the attitude, behaviour, and emotional state of the sender. This application is implemented through a combination of NLP and statistics, by assigning values to the text (positive, negative, or neutral) and identifying the mood of the context (happy, sad, angry, etc.).
4. Machine Translation
Machine translation is used to translate text or speech from one natural language to another
natural language.
Example: Google Translate
5. Spelling correction
Word-processor software from Microsoft Corporation, such as MS Word and PowerPoint, provides spelling correction.
6. Speech Recognition
Speech recognition is used for converting spoken words into text. It is used in applications,
such as mobile, home automation, video recovery, dictating to Microsoft Word, voice
biometrics, voice user interface, and so on.
7. Chatbot
Implementing chatbots is one of the important applications of NLP. Chatbots are used by many companies to provide chat-based customer services.
8. Information extraction
Information extraction is one of the most important applications of NLP. It is used for
extracting structured information from unstructured or semi-structured machine-readable
documents.
9. Natural Language Understanding (NLU)
It converts large sets of text into more formal representations, such as first-order logic structures, that are easier for computer programs to manipulate.
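As a small illustration of the spam-detection application above, here is a hedged scikit-learn sketch; the training messages and labels are invented toy data:

    # Toy spam classifier: bag-of-words features + naive Bayes.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = ["win a free prize now", "meeting at noon tomorrow",
                "free lottery ticket claim now", "lunch with the team"]
    labels = ["spam", "ham", "spam", "ham"]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(messages)   # word-count features

    model = MultinomialNB()
    model.fit(X, labels)

    print(model.predict(vectorizer.transform(["claim your free prize"])))
    # expected: ['spam']

A real system would train on thousands of labelled e-mails, but the pipeline (vectorize, fit, predict) stays the same.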
Phases of NLP
There are the following five phases of NLP:
1. Lexical and Morphological Analysis
The first phase of NLP is lexical analysis. This phase scans the source text as a stream of characters and converts it into meaningful lexemes. It divides the whole text into paragraphs, sentences, and words.
2. Syntactic Analysis (Parsing)
Syntactic analysis is used to check grammar and word arrangement, and shows the relationships among the words.
Example: "Agra goes to the Poonam"
In the real world, "Agra goes to the Poonam" does not make any sense, so this sentence is rejected by the syntactic analyzer.
3. Semantic Analysis
Semantic analysis is concerned with the meaning representation. It mainly focuses on the
literal meaning of words, phrases, and sentences.
4. Discourse Integration
Discourse integration depends upon the sentences that precede a given sentence and also invokes the meaning of the sentences that follow it.
5. Pragmatic Analysis
Pragmatic analysis is the fifth and last phase of NLP. It helps you discover the intended effect by applying a set of rules that characterize cooperative dialogues.
For Example: "Open the door" is interpreted as a request instead of an order.
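The first two phases can be tried out with NLTK; this sketch assumes the required NLTK models have been downloaded beforehand (e.g. nltk.download("punkt") and nltk.download("averaged_perceptron_tagger")):

    # Lexical analysis (tokenization) and a first step toward syntactic
    # analysis (part-of-speech tagging) using NLTK.
    import nltk

    text = "Poonam goes to Agra."
    tokens = nltk.word_tokenize(text)   # lexical analysis: text -> lexemes
    tags = nltk.pos_tag(tokens)         # POS tags feed the parser

    print(tokens)  # ['Poonam', 'goes', 'to', 'Agra', '.']
    print(tags)    # e.g. [('Poonam', 'NNP'), ('goes', 'VBZ'), ...]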
NLP Libraries
Scikit-learn: It provides a wide range of algorithms for building machine learning models in
Python.
Natural Language Toolkit (NLTK): NLTK is a complete toolkit for all NLP techniques.
Pattern: It is a web mining module for NLP and machine learning.
TextBlob: It provides an easy interface for basic NLP tasks like sentiment analysis, noun phrase extraction, or POS tagging.
Quepy: Quepy is used to transform natural language questions into queries in a database
query language.
SpaCy: SpaCy is an open-source NLP library which is used for Data Extraction, Data
Analysis, Sentiment Analysis, and Text Summarization.
Gensim: Gensim works with large datasets and processes data streams.
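For instance, TextBlob covers two of the tasks listed above in a couple of lines (after pip install textblob and downloading its corpora with python -m textblob.download_corpora):

    # Sentiment analysis and noun-phrase extraction with TextBlob.
    from textblob import TextBlob

    blob = TextBlob("The new camera is excellent and very easy to use.")
    print(blob.sentiment)     # Sentiment(polarity=..., subjectivity=...)
    print(blob.noun_phrases)  # e.g. ['new camera']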
KNOWLEDGE ACQUISITION
Knowledge acquisition is the process of gathering or collecting knowledge from various sources. It is the process of adding new knowledge to a knowledge base and refining or improving knowledge that was previously acquired. Acquisition is the process of expanding the capabilities of a system or improving its performance at some specified task, so it is the goal-oriented creation and refinement of knowledge. Acquired knowledge may consist of facts, rules, concepts, procedures, heuristics, formulas, relationships, statistics or any other useful information. Sources of this knowledge may be experts in the domain of interest, textbooks, technical papers, database reports, journals and the environment. Knowledge acquisition is a continuous process, spread over the entire lifetime of the system. An example of knowledge acquisition is machine learning: the autonomous creation or refinement of knowledge through the use of computer programs. Newly acquired knowledge should be integrated with existing knowledge in some meaningful way, and the knowledge should be accurate, non-redundant, consistent and fairly complete. Knowledge acquisition supports activities like entering knowledge and maintaining the knowledge base. The knowledge acquisition process also sets up dynamic data structures for existing knowledge so that it can be refined.
The role of the knowledge engineer is also very important in developing and refining knowledge. Knowledge engineers are professionals who elicit knowledge from experts. They integrate knowledge from various sources: they create and edit code, operate various interactive tools, build the knowledge base, etc.
Figure Knowledge Engineer’s Roles in Interactive Knowledge Acquisition
Knowledge Acquisition Techniques
Many techniques have been developed to elicit knowledge from an expert. These are termed knowledge acquisition techniques. They are:
a) Diagram-Based Techniques
b) Matrix-Based Techniques
c) Hierarchy-Generation Techniques
d) Protocol Analysis Techniques
e) Protocol Generation Techniques
f) Sorting Techniques
Diagram-based techniques involve the generation and use of concept maps, event diagrams and process maps. These techniques capture features like "why, when, who, how and where". Matrix-based techniques involve the construction of grids indicating such things as problems encountered against possible solutions. Hierarchical techniques are used to build hierarchical structures like trees. Protocol analysis techniques are used to identify the types of knowledge involved, like goals, decisions, relationships etc. Protocol generation techniques include various types of interviews: structured, semi-structured and unstructured.
The most common knowledge acquisition technique is the face-to-face interview. The interview is a very important technique which must be planned carefully, and the results of an interview must be verified and validated. Some common variations of the unstructured interview are the talk-through, teach-through and read-through. The knowledge engineer slowly learns about the problem and can then build a representation of the knowledge. Unstructured interviews seldom provide complete or well-organized descriptions of cognitive processes, because the domains are generally complex and the experts usually find it very difficult to express some of their more important knowledge. The data acquired are often unrelated, exist at varying levels of complexity, and are difficult for the knowledge engineer to review, interpret and integrate. Structured interviews, on the other hand, are a systematic, goal-oriented process that forces organized communication between the knowledge engineer and the expert. In a structured interview, interpersonal communication and analytical skills are important.
AI application to robotics
Artificial intelligence (AI) and robotics are a powerful combination for automating tasks inside and
outside of the factory setting. In recent years, AI has become an increasingly common presence in
robotic solutions, introducing flexibility and learning capabilities in previously rigid applications.
4 Robotic Applications that Use Artificial Intelligence
1. Assembly
AI is a highly useful tool in robotic assembly applications. When combined with advanced vision
systems, AI can help with real-time course correction, which is particularly useful in complex
manufacturing sectors like aerospace. AI can also be used to help a robot learn on its own which
paths are best for certain processes while it’s in operation.
2. Packaging
Robotic packaging frequently uses forms of AI for quicker, lower-cost and more accurate packaging. AI helps save, and constantly refine, the motions a robotic system makes, which makes installing and moving robotic systems easy enough for anybody to do.
3. Customer Service
Robots are now being used in a customer service capacity in retail stores and hotels around the
world. Most of these robots leverage AI natural language processing abilities to interact with
customers in a more human way. Often, the more these systems can interact with humans, the more
they learn.
4. Open Source Robotics
A handful of robotic systems are now being sold as open source systems with AI capability. This
way, users can teach their robots to do custom tasks based on their specific application, such as
small-scale agriculture. The convergence of open source robotics and AI could be a huge trend in
the future of AI robots.
Current Trends in Intelligent Systems
Artificial Intelligence is a hot topic for all industries in current times. In fact, 77% of people in the world already use AI in some form (and the remaining 23% will start using it soon!). Artificial Intelligence does not only impact the technology industry but any and all industries you can think of. And with top companies like Google, Facebook, Microsoft, Amazon, etc. working on all possible applications of AI in multiple fields, there is no doubt that it will make a big difference in the future! Adobe even predicts that 80% of all emerging technologies will have some AI foundations by 2021.
And this integration of Artificial Intelligence into all existing and emerging technologies is only increasing year by year. Keeping that in mind, let's look at some of the top Artificial Intelligence trends:
1. Artificial Intelligence Enabled Chips
AI-enabled chips are the latest trend in Artificial Intelligence. Their popularity can be gauged from the fact that revenue from them is estimated to reach $91,185 million in 2025, up from $6,638 million in 2018. While some brands have already integrated AI-enabled chips, they will soon be added to all the latest smartphones. This is necessary because AI requires specialized processors alongside the CPU, as the CPU alone is not enough: extra hardware is needed to perform the complex mathematical computations required by AI models. These AI-enabled chips will make tasks requiring AI, such as facial recognition, natural language processing, object detection and computer vision, much faster.
There are many companies like NVIDIA, Qualcomm, AMD, etc. that are creating AI-Enabled
Chips that will boost the performance of AI applications. In fact, Qualcomm is launching its new
AI-Enabled Snapdragon processors in 2020 that will be able to perform 15 trillion operations per
second with efficiency. This will improve all the AI-based services in the phone like real-time AI
translation, photography, virtual assistants, etc. And all this while utilizing considerably lower
power than expected.
2. Artificial Intelligence and Internet of Things
Artificial Intelligence and the Internet of Things together is a match made in technical heaven!!!
These two technologies used together can change the way technologies operate currently. IoT devices create a lot of data that needs to be mined for actionable insights; on the other hand, Artificial Intelligence algorithms require data before drawing any conclusions. So the data collected by IoT can then be used by Artificial Intelligence algorithms to create useful results, which are further acted on by the IoT devices.
One example of this is smart home devices, which are becoming more and more popular. In fact, 28% of all homes in the US could become smart homes by 2021. And businesses are also increasingly adopting smart devices, as they reduce costs and are more efficient as well. Google-owned Nest is the most popular name in this market, as it produces smart products like thermostats, alarm systems, doorbells, etc.
The integration of Artificial Intelligence and Internet of Things has also led to increasingly smart
cities like New York. Here, there are facilities like the Automated Meter Reading (AMR) system
to monitor water usage and solar-powered smart bins that can monitor trash levels and schedule
the waste pick-up on time. And this integration of intelligence is only set to increase in the future
with more and more innovations coming up.
3. Automated Machine Learning
More and more organizations will shift towards Automated Machine Learning in the coming years. It is quite complicated and expensive to apply traditional machine learning models to all business problems in the real world. So a better solution is to use Automated Machine Learning, which allows even non-experts to use Machine Learning algorithms and techniques without being an ML tech wizard!
This means that tools like Google Cloud AutoML, which can be used to train custom-made, high-quality ML models with minimal machine learning expertise, will become quite popular in the future. These tools can provide as much customization as required without requiring detailed knowledge of the complex workflow of Machine Learning. However, AutoML is not total child's play, and some ML expertise is still required to set additional parameters as needed.
Companies in the US that already use AutoML include BlackLocus, Zenefits, Nationstar Mortgage, etc., with many more to follow.
4. Artificial Intelligence and Cloud Computing
Artificial Intelligence and Cloud Computing can totally revolutionize the current market and create
new methods of improvement. Currently, it is obvious that AI has huge potential and it is the
technology of the future but the integration of AI also requires experienced employees and
enormous infrastructure. This is where Cloud Computing can provide immense help. Even if
companies don’t have massive computing power and access to large data sets, they can still avail
of the benefits of Artificial Intelligence through the cloud without spending huge sums of money.
At the same time, AI can also be used to monitor and manage issues in the cloud. Some experts
predict that AI can first be used to automate the basic workflow of both the private and public
cloud computing systems and then eventually it can be used to independently create working
scenarios that are more efficient.
Currently, the most famous cloud leaders in the market that incorporate AI into their cloud services
are Amazon Web Service (AWS), Google, IBM, Alibaba, Oracle, etc. These are expected to
grow even more in the future with the increasing popularity of both Artificial Intelligence and
Cloud Computing.
5. Artificial Intelligence CyberSecurity
With the rising popularity of AI, it is even becoming a key player in cybersecurity. The addition of
Artificial Intelligence can improve the analysis, understanding, and prevention of cybercrime. It
can also enhance the cybersecurity measures of companies so that they are safe and secure.
However, it is also expensive and difficult to implement in all applications. Moreover, AI is also a
tool in the hands of cybercriminals who use this to improve and enhance their cyberattacks.
Despite all this, AI will be a critical cybersecurity element in the future. According to a study
conducted by Capgemini Research Institute, AI is necessary for cybersecurity because hackers are
already using it for cyberattacks. 75% of the surveyed executives also believe that AI allows for a
faster response to security breaches.
So companies can start with Artificial Intelligence CyberSecurity by first implementing AI in their
existing CyberSecurity protocols. This can be done by using predictive analytics to detect threats
and malicious activity, using natural language processing for security, enhancing biometric-based
login techniques, etc.
Distributed Artificial Intelligence (DAI)
Distributed Artificial Intelligence (DAI) is an approach to solving complex learning, planning, and
decision making problems. It is embarrassingly parallel, thus able to exploit large scale computation
and spatial distribution of computing resources. These properties allow it to solve problems that
require the processing of very large data sets. DAI systems consist of autonomous learning
processing nodes (agents), that are distributed, often at a very large scale. DAI nodes can act
independently and partial solutions are integrated by communication between nodes, often
asynchronously. By virtue of their scale, DAI systems are robust and elastic, and by necessity,
loosely coupled. Furthermore, DAI systems are built to be adaptive to changes in the problem
definition or underlying data sets due to the scale and difficulty in redeployment.
DAI systems do not require all the relevant data to be aggregated in a single location, in contrast to
monolithic or centralized Artificial Intelligence systems which have tightly coupled and
geographically close processing nodes. Therefore, DAI systems often operate on sub-samples or
hashed impressions of very large datasets. In addition, the source dataset may change or be updated
during the course of the execution of a DAI system.
Goals
The objectives of Distributed Artificial Intelligence are to solve the reasoning, planning, learning and perception problems of artificial intelligence, especially if they require large data, by distributing the problem to autonomous processing nodes (agents). To reach the objective, DAI requires:
 A distributed system with robust and elastic computation on unreliable and failing resources that are loosely coupled
 Coordination of the actions and communication of the nodes
 Subsamples of large data sets and online machine learning
There are many reasons for wanting to distribute intelligence or to cope with multi-agent systems. Mainstream problems in DAI research include the following (a small parallel problem-solving sketch follows this list):
 Parallel problem solving: mainly deals with how classic artificial intelligence concepts can be modified so that multiprocessor systems and clusters of computers can be used to speed up calculation.
 Distributed problem solving (DPS): the concept of an agent, an autonomous entity that can communicate with other agents, was developed to serve as an abstraction for developing DPS systems. See below for further details.
 Multi-Agent Based Simulation (MABS): a branch of DAI that builds the foundation for simulations that need to analyze phenomena not only at the macro level but also at the micro level, as in many social simulation scenarios.
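A minimal sketch of the parallel/distributed problem-solving idea: autonomous worker processes (agents) each handle a sub-sample of the data and send partial solutions back for integration. The word-counting task is an arbitrary stand-in:

    # DAI-style sketch: agents compute partial solutions on sub-samples
    # and communicate them asynchronously via a queue.
    from multiprocessing import Process, Queue
    from collections import Counter

    def agent(sub_sample, queue):
        partial = Counter()
        for doc in sub_sample:
            partial.update(doc.split())   # partial solution on this node
        queue.put(partial)                # communicate it back

    if __name__ == "__main__":
        data = [["ai systems learn", "agents plan"],
                ["distributed ai agents"]]
        queue = Queue()
        workers = [Process(target=agent, args=(chunk, queue))
                   for chunk in data]
        for w in workers:
            w.start()
        # Integration step: merge partial solutions as they arrive.
        total = sum((queue.get() for _ in workers), Counter())
        for w in workers:
            w.join()
        print(total)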
Parallel AI
Parallel AI has built an industry-leading real-time artificial intelligence platform that allows companies across various sectors to co-create custom AI business solutions without the need for extensive AI expertise.
Parallel AI was founded with the mission of giving every institution the power to make better decisions to change the world. Companies are facing a new industrial revolution that demands new technologies to make faster, better decisions. Current technologies like Deep Learning are at the core of smart decision-making solutions. However, these systems differ from human intelligence in crucial ways, in particular in what they learn and how they learn. Parallel AI is an enterprise-grade AI platform that simplifies the process of building, deploying and scaling decision-making solutions for the automation of complex enterprise processes. The technology behind Parallel AI leverages ML and AI techniques but focuses on decisions rather than predictions. Its goal is to give companies the technology they need to deliver efficient decision-making solutions, and to do it in a radically simple way to drive real business outcomes.
What is the Difference Between Parallel and Distributed Computing?
The main difference between parallel and distributed computing is that parallel computing
allows multiple processors to execute tasks simultaneously while distributed computing
divides a single task between multiple computers to achieve a common goal.
A single processor executing one task after the other is not an efficient method in a computer.
Parallel computing provides a solution to this issue as it allows multiple processors to execute
tasks at the same time. Modern computers support parallel computing to increase the performance
of the system. On the other hand, distributed computing allows multiple computers to
communicate with each other and accomplish a goal. All these computers communicate and
collaborate with each other by passing messages via the network. Organizations such as Facebook
and Google widely use distributed computing to allow the users to share resources.
What is Parallel Computing
Parallel computing is also called parallel processing. There are multiple processors in parallel
computing. Each of them performs the computations assigned to them. In other words, in parallel
computing, multiple calculations are performed simultaneously. The systems that support parallel
computing can have a shared memory or distributed memory. In shared memory systems, all the
processors share the memory. In distributed memory systems, memory is divided among the
processors.
There are multiple advantages to parallel computing. As there are multiple processors working simultaneously, it increases CPU utilization and improves performance. Moreover, failure in one processor does not affect the functionality of the other processors, so parallel computing provides reliability. On the other hand, adding processors is costly, and if one processor depends on the output of another, the dependency can introduce latency.
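A minimal Python sketch of parallel computing: the same computation is split over several processors with multiprocessing (the workload below is just a placeholder):

    # Parallel computing sketch: one task per worker process.
    from multiprocessing import Pool

    def heavy_task(n):
        return sum(i * i for i in range(n))   # stand-in for real work

    if __name__ == "__main__":
        inputs = [10**6, 2 * 10**6, 3 * 10**6, 4 * 10**6]
        with Pool() as pool:          # one worker per CPU core by default
            results = pool.map(heavy_task, inputs)
        print(results)

Here all workers run on one machine; in distributed computing, the same decomposition would instead span multiple networked machines that cooperate by passing messages.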
What is Distributed Computing
Distributed computing divides a single task between multiple computers. Each computer can communicate with the others via the network. All computers work together to achieve a common goal; thus, they all work as a single entity. A computer in a distributed system is called a node, while a collection of nodes is called a cluster.
There are multiple advantages to using distributed computing. It allows scalability and makes it easier to share resources. It also helps to perform computation tasks efficiently. On the other hand, it is difficult to develop distributed systems, and there can be network issues.