
ET group assignment

1. Measuring the human likeness of AI
The human likeness of artificial intelligence can be measured using the Turing test.
The Turing test is a method of inquiry in artificial intelligence for determining whether or not a
computer is capable of acting like a human being. The test is named after Alan Turing, its
originator, an English computer scientist, mathematician and theoretical biologist. Turing
proposed that a computer can be said to possess artificial intelligence if it can mimic human
responses under specific conditions.
To ensure that an artificial intelligence can continue to behave like a human, it is important to
continually test and evaluate its performance in a variety of situations. This may involve exposing
the artificial intelligence to new data and scenarios, and monitoring its behavior to ensure that it
continues to behave in a human-like manner. Additionally, ongoing research and development in
the field of artificial intelligence can help to improve the programming and data used to train
artificial intelligence systems, which can help to ensure that they continue to behave like humans in
a wide range of situations.
The human likeness of an AI entity can be assessed using:
• The Turing test
• The cognitive modeling approach
• The laws of thought approach
• The rational agent approach
The Turing Test, proposed by Alan Turing (Turing, 1950), was designed to provide a satisfactory
operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve
human-level performance in all cognitive tasks, sufficient to fool an interrogator. Roughly speaking,
the test he proposed is that the computer should be interrogated by a human via a teletype, and
passes the test if the interrogator cannot tell if there is a computer or a human at the other end.
Programming a computer to pass the test provides plenty to work on; a minimal sketch of the
blind interrogation set-up is given after the list below. The computer would need to possess
the following capabilities:
• Natural language processing, to enable it to communicate successfully in English (or some other human language);
• Knowledge representation, to store information provided before or during the interrogation;
• Automated reasoning, to use the stored information to answer questions and to draw new conclusions;
• Machine learning, to adapt to new circumstances and to detect and extrapolate patterns.
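As a purely illustrative sketch (nothing here is prescribed by the text), the blind set-up of the interrogation can be mimicked in a few lines of Python; the reply functions and example questions are placeholders:

import random

def machine_reply(question):
    # Hypothetical stand-in for the program under test.
    return "I'd have to think about that."

def human_reply(question):
    # In a real test, a hidden human confederate types the answer.
    return input("(hidden human) " + question + " ")

def imitation_game(questions):
    # The interrogator sees answers labelled A and B, but is never told
    # which label belongs to the machine.
    responders = [machine_reply, human_reply]
    random.shuffle(responders)                  # hide who is who
    labels = dict(zip("AB", responders))
    for q in questions:
        print("Q:", q)
        for label, responder in labels.items():
            print(" ", label + ":", responder(q))
    guess = input("Which of A or B is the computer? ")
    return labels[guess] is machine_reply       # True if the machine was identified

# Example session (uncomment to run interactively):
# caught = imitation_game(["What is your favourite poem?", "Add 34957 to 70764."])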
Turing's test deliberately avoided direct physical interaction between the interrogator and the
computer, because physical simulation of a person is unnecessary for intelligence. However, the
so-called total Turing Test includes a video signal so that the interrogator can test the subject's
perceptual abilities, as well as the opportunity for the interrogator to pass physical objects
``through the hatch.'' To pass the total Turing Test, the computer will need:
• Computer vision, to perceive objects, and
• Robotics, to move them about.
Within AI, there has not been a big effort to try to pass the Turing test. The issue of acting like a
human comes up primarily when AI programs have to interact with people, as when an expert
system explains how it came to its diagnosis, or a natural language processing system has a
dialogue with a user. These programs must behave according to certain normal conventions of
human interaction in order to make themselves understood. The underlying representation and
reasoning in such a system may or may not be based on a human model.
The cognitive modeling approach
The cognitive modeling approach is an area of computer science that deals with simulating human
problem-solving and mental processing in a computerized model. Such a model can be used to
simulate or predict human behavior or performance on tasks similar to the ones modeled, and to
improve human-computer interaction. If we are going to say that a given program thinks like a human, we
must have some way of determining how humans think. We need to get inside the actual workings
of human minds. There are two ways to do this: through introspection--trying to catch our own
thoughts as they go by--or through psychological experiments. Once we have a sufficiently precise
theory of the mind, it becomes possible to express the theory as a computer program. If the
program's input/output and timing behavior matches human behavior, that is evidence that some
of the program's mechanisms may also be operating in humans. Early researchers in this tradition,
such as Newell and Simon with their General Problem Solver (GPS), were more concerned with
comparing the trace of a program's reasoning steps to traces of human subjects solving the same problems.
The interdisciplinary field of cognitive science brings together computer models from AI and
experimental techniques from psychology to try to construct precise and testable theories of the
workings of the human mind. Although cognitive science is a fascinating field in itself, we are not
going to be discussing it all that much in this book. We will occasionally comment on similarities or
differences between AI techniques and human cognition. Real cognitive science, however, is
necessarily based on experimental investigation of actual humans or animals, and we assume that
the reader only has access to a computer for experimentation. We will simply note that AI and
cognitive science continue to fertilize each other, especially in the areas of vision, natural language,
and learning.
The laws of thought approach
The laws of thought approach is based on the idea that human thought and reasoning can be
described and formalized using logical principles. By creating artificial intelligence systems based
on these principles, we can create systems that reason and make decisions in a way that is similar
to humans.
In the ``laws of thought'' approach to AI, the whole emphasis was on correct inferences. The Greek
philosopher Aristotle was one of the first to attempt to codify ``right thinking,'' that is, irrefutable
reasoning processes. His famous syllogisms provided patterns for argument structures that always
gave correct conclusions given correct premises. For example, ``Socrates is a man; all men are
mortal; therefore Socrates is mortal.'' These laws of thought were supposed to govern the operation
of the mind, and initiated the field of logic.
There are two main obstacles to this approach.
First, it is not easy to take informal knowledge and state it in the formal terms required by
logical notation, particularly when the knowledge is less than 100% certain.
Second, there is a big difference between being able to solve a problem ``in principle'' and
doing so in practice. Even problems with just a few dozen facts can exhaust the
computational resources of any computer unless it has some guidance as to which
reasoning steps to try first. Although both of these obstacles apply to any attempt to build
computational reasoning systems, they appeared first in the logicist tradition because the
power of the representation and reasoning systems is well-defined and fairly well understood.
For example: all A are B; all B are C; therefore all A are C.
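In first-order logic this pattern (the classical ``Barbara'' syllogism) can be written out explicitly; as a sketch in LaTeX notation, with A, B and C as illustrative predicate symbols:

\forall x\,(A(x) \to B(x)),\quad \forall x\,(B(x) \to C(x)) \;\vdash\; \forall x\,(A(x) \to C(x))

The Socrates example corresponds to the premises Man(Socrates) and \forall x\,(Man(x) \to Mortal(x)), from which Mortal(Socrates) follows.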
Rational Agent Approach
The rational agent approach is based on the idea that an intelligent agent should be able to perceive
its environment, reason about its current state, and take actions that maximize its chances of
achieving its goals. This approach involves creating systems that are capable of learning from
experience and adapting their behavior over time. Acting rationally means acting so as to
achieve one's goals, given one's beliefs. An agent is just something that perceives and acts.
In this approach, AI is viewed as the study and construction of rational agents. In the ``laws
of thought'' approach to AI, the whole emphasis was on correct inferences. Making correct
inferences is sometimes part of being a rational agent, because one way to act rationally is to reason
logically to the conclusion that a given action will achieve one's goals, and then to act on that
conclusion. On the other hand, correct inference is not all of rationality, because there are often
situations where there is no provably correct thing to do, yet something must still be done. There
are also ways of acting rationally that cannot be reasonably said to involve inference. For example,
pulling one's hand off of a hot stove is a reflex action that is more successful than a slower action
taken after careful deliberation.
The study of AI as rational agent design therefore has two advantages.
First, it is more general than the ``laws of thought'' approach, because correct inference is
only a useful mechanism for achieving rationality, and not a necessary one.
Second, it is more amenable to scientific development than approaches based on human
behavior or human thought, because the standard of rationality is clearly defined and
completely general. Human behavior, on the other hand, is well-adapted for one specific
environment and is the product, in part, of a complicated and largely unknown
evolutionary process that still may be far from achieving perfection.
For example, consider a vacuum cleaner as a rational agent. Its environment is the floor it is
trying to clean. It has sensors, such as cameras or dirt sensors, which sense the environment,
and it has brushes and suction pumps as actuators which take action.
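As a minimal sketch (not part of the assignment text), a simple reflex version of such an agent can be written in a few lines of Python; the two-square world, location names and percept format below are assumptions made for illustration:

# Simple reflex vacuum agent in a two-square world, loosely following the
# classic textbook example. Percepts are (location, status) pairs.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"          # cleaning always moves the agent toward its goal
    elif location == "A":
        return "MoveRight"     # nothing to clean here; check the other square
    else:
        return "MoveLeft"

# Tiny usage example
world = {"A": "Dirty", "B": "Clean"}
location = "A"
for _ in range(3):
    action = reflex_vacuum_agent((location, world[location]))
    if action == "Suck":
        world[location] = "Clean"
    elif action == "MoveRight":
        location = "B"
    else:
        location = "A"
    print(action, world)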
2. The application of AI in manufacturing and production
There are different applications of AI that enhance the manufacturing and production process.
These include:
2.1. Predictive maintenance
AI systems help manufacturers forecast when or if functional equipment will fail so that its
maintenance and repair can be scheduled before the failure occurs. In this way, manufacturers can
improve efficiency while reducing the cost of machine failure. Manufacturers leverage AI
technology to identify potential downtime and accidents by analyzing sensor data.
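As an illustration only (the sensor features, thresholds and synthetic data below are assumptions, not part of the source text), an anomaly detector such as scikit-learn's IsolationForest can flag sensor readings that drift away from healthy operation:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: vibration (mm/s), temperature (deg C) during healthy operation
healthy = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(1000, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings streaming in from the machine's sensors
new_readings = np.array([[2.1, 61.0],    # normal
                         [4.5, 78.0]])   # drifting toward failure
flags = model.predict(new_readings)      # 1 = normal, -1 = anomalous
for reading, flag in zip(new_readings, flags):
    if flag == -1:
        print("Schedule maintenance check:", reading)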
2.2. Price forecasting of raw material
The extreme price volatility of raw materials has always been a challenge for
manufacturers. Businesses have to adapt to the unstable price of raw materials to remain
competitive in the market. AI-powered software can predict material prices more accurately than
humans, and it learns from its mistakes.
2.3. Quality assurance
Quality assurance is the maintenance of a desired level of quality in a service or product.
Assembly lines are data-driven, interconnected, and autonomous networks. These
assembly lines work based on a set of parameters and algorithms that provide guidelines to
produce the best possible end-products. AI systems can detect deviations from the expected
output by using machine vision technology, since most defects are visible. When an end-product
is of lower quality than expected, AI systems trigger an alert so that users can react and make
adjustments.
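A toy sketch of such a machine-vision check is shown below; it simply compares a photo of a product against a ``golden'' reference image with OpenCV. The file names, image sizes and tolerances are illustrative assumptions, not a production method:

import cv2

# Reference and product images are assumed to be aligned and the same size.
reference = cv2.imread("golden_sample.png", cv2.IMREAD_GRAYSCALE)
product = cv2.imread("current_product.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(reference, product)                     # pixel-wise difference
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # keep large differences
defect_pixels = cv2.countNonZero(mask)

if defect_pixels > 500:   # tolerance chosen arbitrarily for this sketch
    print(f"Possible defect detected ({defect_pixels} differing pixels) - alert operator")
else:
    print("Product within tolerance")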
2.4. Inventory management
Machine learning solutions can promote inventory planning activities as they are good at
dealing with demand forecasting and supply planning. AI-powered demand forecasting
tools provide more accurate results than traditional demand forecasting methods (ARIMA,
exponential smoothing, etc.) that engineers use in manufacturing facilities. These tools enable
businesses to manage inventory levels better so that cash-in-stock and out-of-stock
scenarios are less likely to happen.
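To make the contrast concrete, the sketch below compares a simple exponential smoothing forecast with a machine-learning model that can also use extra signals such as planned promotions. The data is synthetic, and the smoothing constant and feature names are assumptions made for illustration:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
weeks = np.arange(104)
promo = (weeks % 8 == 0).astype(float)            # a promotion every 8th week
demand = 100 + 10*np.sin(2*np.pi*weeks/52) + 30*promo + rng.normal(0, 5, 104)

# Traditional method: simple exponential smoothing (forecast = last smoothed level)
alpha, level = 0.3, demand[0]
for d in demand[:-1]:
    level = alpha*d + (1 - alpha)*level
print("Exponential smoothing forecast for next week:", round(level, 1))

# ML method: learn demand from week-of-year and the promotion flag
X = np.column_stack([weeks % 52, promo])
model = GradientBoostingRegressor().fit(X[:-1], demand[:-1])
next_week = np.array([[weeks[-1] % 52 + 1, 1.0]])  # a promotion is planned
print("ML forecast for next (promo) week:", round(model.predict(next_week)[0], 1))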
2.5. Process optimization
AI-powered software can help organizations optimize processes to achieve sustainable
production levels. Manufacturers can use AI-powered process mining tools to identify
and eliminate bottlenecks in the organization's processes. For instance, timely and accurate
delivery to a customer is the ultimate goal in the manufacturing industry. However, if the
company has several factories in different regions, building a consistent delivery system is
difficult.
By using a process mining tool, manufacturers can compare the performance of different
regions down to individual process steps, including duration, cost, and the person
performing the step. These insights help streamline processes and identify bottlenecks so
that manufacturers can take action.
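The sketch below imitates, in a very reduced form, the kind of per-step, per-region comparison such a tool performs; the event-log columns and figures are invented for illustration:

import pandas as pd

event_log = pd.DataFrame({
    "region":   ["EU", "EU", "US", "US", "EU", "US"],
    "step":     ["pick", "pack", "pick", "pack", "ship", "ship"],
    "hours":    [1.2, 0.8, 2.5, 0.9, 24.0, 48.0],
    "cost_usd": [15, 10, 30, 11, 120, 260],
})

# Average duration and cost of each process step, broken down by region
summary = (event_log
           .groupby(["step", "region"])
           .agg(avg_hours=("hours", "mean"), avg_cost=("cost_usd", "mean"))
           .reset_index())
print(summary)

# Steps whose duration differs most between regions are bottleneck candidates
spread = summary.groupby("step")["avg_hours"].agg(lambda s: s.max() - s.min())
print("Largest regional gap:", spread.idxmax(), round(spread.max(), 1), "hours")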
3. There are five basic principles of professional ethics. These are:
• Integrity
• Objectivity
• Professional competence and due care
• Confidentiality
• Professional behavior
Integrity: to be straightforward and honest in all professional and business relationships.
Objectivity: To not allow bias, conflict of interest or undue influence of others to override
professional or business judgments, and having the resolve to ensure those judgements are
ethical.
Professional Competence and Due Care: To maintain professional knowledge and skill
at the level required to ensure that a client or employer receives competent professional
service based on current developments in practice, legislation and techniques, and act
diligently and in accordance with applicable professional standards.
Confidentiality: To respect the confidentiality of information acquired as a result of
professional and business relationships and, therefore, not disclose any such information
to third parties without proper and specific authority, unless there is a legal, professional,
or ethical right or duty to disclose, nor use the information for the personal advantage of
the professional accountant or third parties.
Professional Behaviour: To take personal responsibility for demonstrating, and
leadership by adopting, the highest standards of professionalism, by complying with
relevant laws and regulations and accepting the moral obligation to act in a professional
manner in the public interest, avoiding any conduct that discredits the profession.
4. Challenges concerning professional ethics that are posed as a result of the rise of
emerging technologies are:
• Misuse of Personal Information
• Misinformation and Deep Fakes
• Lack of Oversight and Acceptance of Responsibility
Misuse of Personal Information
One of the primary ethical dilemmas in our technologically empowered age revolves
around how businesses use personal information. As we browse internet sites, make online
purchases, enter our information on websites, engage with different businesses online and
participate in social media, we are constantly providing personal details. Companies often
gather information to hyper-personalize our online experiences, but to what extent is that
information actually impeding our right to privacy?
Personal information is the new gold, as the saying goes. We have commoditized data
because of the value it provides to businesses attempting to reach their consumer base. But
when does it go too far? For businesses, it’s extremely valuable to know what kind of
products are being searched for and what type of content people are consuming the most.
For political figures, it’s important to know what kind of social or legal issues are getting
the most attention. These valuable data points are often exploited so that businesses or
entities can make money or advance their goals. Facebook in particular has come under fire
several times over the years for selling personal data it gathers on its platform.
Misinformation and Deep Fakes
One thing that became evident during the 2016 and 2020 U.S. presidential elections was
the potential of misinformation to gain a wider support base. The effect created
polarization that has had wide-reaching effects on global economic and political
environments.
In contrast to how information was accessed prior to the internet, we are constantly
flooded with real-time events and news as it breaks. Celebrities and political figures can
disseminate opinions on social media without fact checking, which is then aggregated and
further spread despite its accuracy—or inaccuracy. Information no longer undergoes the
strenuous validation process that we formerly used to publish newspapers and books.
Similarly, we used to believe that video told a story that was undeniably rooted in truth.
But deep fake technology now allows such a sophisticated manipulation of digital imagery
that people appear to be saying and doing things that never happened. The potential for
privacy invasion and misuse of identity is very high with the use of this technology.
Lack of Oversight and Acceptance of Responsibility
Most companies operate with a hybrid stack, comprised of a blend of third-party and
owned technology. As a result, there is often some confusion about where responsibility
lies when it comes to governance, use of big data, cybersecurity concerns and managing
personally identifiable information or PII. Whose responsibility is it really to ensure data is
protected? If you engage a third party for software that processes payments, do you bear
any responsibility if credit card details are breached? The fact is that it’s everyone’s job.
Businesses need to adopt a perspective where all collective parties share responsibility.
Similarly, many experts lobby for a global approach to governance, arguing that local
policing is resulting in fractured policy making and a widespread mismanagement of data.
Similar to climate change, we need to band together if we truly want to see improvement.
5. Common ethical rules that must be applied in all technologies are:
• Transparency
• Respect for human values
• Fairness
• Safety
• Accountability
• Privacy
5.1. Transparency
AI-based algorithms and techniques must be transparently designed, with a thorough description
as well as a valid justification for being developed, as they play a crucial role in tracking the results
and ensuring their accordance with human morals so that one can unambiguously comprehend,
perceive, and recognize the design's decision-making mechanism. Twitter serves as an eye-opener
here: in 2021 the company faced huge criticism after its AI image-cropping algorithm was found to
exhibit racial and gender bias. Twitter is now making amends to mitigate the damage caused by the algorithm and
implement the six fundamental attributes of AI Ethics. Considering an industrial/Cyber-Physical
System (CPS) environment, transparency is essential for both humans and universal machines.
5.2. Respect for Human Values
AI inventions are obliged to uphold human values and positively affect the progress of individuals
and industries, as well as to ensure sensitivity toward cultural diversity and beliefs.
5.3. Fairness
Fostering an inclusive environment free from discrimination against employees based on their
gender, color, caste, or religion is essential (including team members from various cultural
backgrounds helps to reduce prejudice and advance inclusivity). In the past, AI algorithms have
been criticized for profiling healthcare data, employees’ resumes, etc. Considering this from a GDPR
perspective, fair use of data in the European jurisdiction is mandatory. Since the fairness aspect
maps across AI fairness and GDPR fair use of data, they must be aligned.
5.4. Safety
Safety relates to both the security of user information and the welfare of individuals. It is essential
to recognize hazards and focus on solutions to eliminate such issues. The users’ ownership over the
data must be protected and preserved by using security techniques such as encryption and giving
users control over what data are used and in what context. This also aligns with the scope of GDPR.
5.5. Accountability
Decision-making procedures should be auditable, particularly when AI is handling private or
sensitive information, such as material covered by copyright law, identifying biometric
information, or personal health records.
5.6. Privacy
Protecting user privacy while using AI techniques must be kept as the highest priority. The user’s
permission must be obtained to utilize and preserve their information. The strictest security
measures must be followed to prevent the disclosure of sensitive data.
Lessons must be learnt from Google's Project Nightingale and the Ascension lawsuits, which were an
outcome of gathering personal data and raised privacy concerns in terms of data sharing and the
use of AI. There are various dilemmas when it comes to the applicability of AI. As an example, AI's
implementation in self-driving vehicles has raised huge ethical concerns: when the software was
designed on a utilitarian approach, in a crash situation it would opt for the option with the fewest
casualties; however, when it was programmed based on social contract theory, the autonomous
vehicle could not make a decision because it kept looking for pre-set conditions in loops, which
ultimately resulted in an accident, as it did not move itself away from the hazardous situation.
This is one of the biggest challenges, to enable AI to think similarly to humans and have the same
ethical and moral conduct; however, with the growing autonomous and self-driving industry there
is no going back. Therefore, the only means to control ethical issues related to AI would be to fully
develop the standards and regulations. Risk impact assessment is merely a means for damage
control (analyzing the impact of a breach or vulnerability if exploited). Likewise, in the cybersecurity
threat landscape, where threat actors are constantly evolving and many implications of AI are yet to
be realized, only best practices and adherence to existing standards and policies can mitigate the
risks associated with AI deployments in the industrial environment.
6.
7
Background
Computer vision is an AI field that develops advanced image processing software for near-limitless
uses. The human eye can only process so much information before falling
short of expectations and requirements. By training computers to effectively analyze the
real and virtual world through images and photos, vast amounts of high-yield information
can be gathered at a greater pace than humans can on their own.
Scientists and engineers have been trying to develop ways for machines to see and
understand visual data for about 60 years. Experimentation began in 1959 when
neurophysiologists showed a cat an array of images, attempting to correlate a response in
its brain. They discovered that it responded first to hard edges or lines, and scientifically,
this meant that image processing starts with simple shapes like straight edges.
At about the same time, the first computer image scanning technology was developed,
enabling computers to digitize and acquire images. Another milestone was reached in 1963
when computers were able to transform two-dimensional images into three-dimensional
forms. In the 1960s, AI emerged as an academic field of study, and it also marked the
beginning of the AI quest to solve the human vision problem.
1974 saw the introduction of optical character recognition (OCR) technology, which could
recognize text printed in any font or typeface. Similarly, intelligent character recognition
(ICR) could decipher hand-written text using neural networks. Since then, OCR and ICR
have found their way into document and invoice processing, vehicle plate recognition,
mobile payments, machine translation and other common applications.
In 1982, neuroscientist David Marr established that vision works hierarchically and
introduced algorithms for machines to detect edges, corners, curves and similar basic
shapes. Concurrently, computer scientist Kunihiko Fukushima developed a network of cells
that could recognize patterns. The network, called the Neocognitron, included
convolutional layers in a neural network.
By 2000, the focus of study was on object recognition, and by 2001, the first real-time face
recognition applications appeared. Standardization of how visual data sets are tagged and
annotated emerged through the 2000s. In 2010, the ImageNet data set became available. It
contained millions of tagged images across a thousand object classes and provided a
foundation for the CNNs and deep learning models used today. In 2012, a team from the
University of Toronto entered a CNN into an image recognition contest. The model, called
AlexNet, significantly reduced the error rate for image recognition. After this breakthrough,
error rates have fallen to just a few percent.
Introduction
Computer vision is a field of artificial intelligence (AI) that enables computers and systems
to derive meaningful information from digital images, videos and other visual inputs — and
take actions or make recommendations based on that information. If AI enables computers
to think, computer vision enables them to see, observe and understand.
Computer vision works much the same as human vision, except humans have a head start.
Human sight has the advantage of lifetimes of context to train how to tell objects apart, how
far away they are, whether they are moving and whether there is something wrong in an
image.
Computer vision trains machines to perform these functions, but it has to do it in much less
time with cameras, data and algorithms rather than retinas, optic nerves and a visual
cortex. Because a system trained to inspect products or watch a production asset can
analyze thousands of products or processes a minute, noticing imperceptible defects or
issues, it can quickly surpass human capabilities.
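As a small, self-contained sketch of this (not taken from the text), a pretrained image classifier from torchvision can be pointed at a photo and asked what it sees. The file name is an assumption, and a recent torchvision (0.13+) is assumed for the weights API:

import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ImageNet classifier; weights are downloaded on first use.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("product_photo.jpg").convert("RGB")   # hypothetical input image
batch = preprocess(image).unsqueeze(0)                    # shape: (1, 3, 224, 224)

with torch.no_grad():
    scores = model(batch)                                 # 1000 ImageNet class scores
top = scores.softmax(dim=1).topk(3)
print("Top-3 class indices:", top.indices.tolist(), "probabilities:", top.values.tolist())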
Advantages of computer vision
Computer vision can automate several tasks without the need for human intervention. As a
result, it provides organizations with a number of advantages:
• Faster and simpler processes - Computer vision systems can carry out repetitive and monotonous tasks at a faster rate, which simplifies the work for humans.
• Better products and services - Computer vision systems that have been trained very well commit virtually zero mistakes, which results in faster delivery of high-quality products and services.
• Cost reduction - Companies do not have to spend as much money fixing flawed processes, because computer vision leaves little room for faulty products and services.
Disadvantages of computer vision
No technology is free from flaws, and this is true of computer vision systems as well.
Here are a few limitations of computer vision:
• Lack of specialists - Companies need a team of highly trained professionals with deep knowledge of the differences between AI, machine learning and deep learning technologies to train computer vision systems. More specialists are needed to help shape this future of technology.
• Need for regular monitoring - If a computer vision system faces a technical glitch or breaks down, this can cause immense losses for companies. Hence, companies need a dedicated team on board to monitor and evaluate these systems.
Application of computer vision
• Self-Driving Cars
With the use of computer vision, autonomous vehicles can understand their environment.
Multiple cameras record the surroundings of the vehicle, and the footage is fed into computer
vision algorithms that analyze the images in perfect sync to locate road edges, decipher
signposts, and detect other vehicles, obstacles, and people. The autonomous vehicle can then
navigate streets and highways on its own, swerve around obstructions, and get its passengers
where they need to go safely.
• Facial Recognition
Facial recognition programs, which identify individuals in photographs, rely heavily on computer
vision. Computer vision algorithms identify facial traits in images and match them against stored
face profiles. Face recognition is increasingly being used to verify the identity of people using
consumer electronics. Social networking applications use facial recognition for both user
detection and user tagging. For the same reason, law enforcement uses facial recognition
software to track down criminals in surveillance footage.
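As a hedged illustration of the detection step only (matching detected faces against stored profiles would need a separate recognition model), OpenCV's bundled Haar cascade can locate faces in an image; the file names below are assumptions:

import cv2

# Load the face detector that ships with OpenCV
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group_photo.jpg")            # hypothetical input photo
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("group_photo_faces.jpg", image)      # save the annotated result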
• Augmented & Mixed Reality
Augmented reality, which allows devices such as smartphones and wearable technology to
superimpose or embed digital content onto real-world environments, also relies heavily on
computer vision. It is computer vision that allows augmented reality equipment to place virtual
items in the real environment. To properly generate depth and proportions and to position
virtual items in the real environment, augmented reality apps rely on computer vision techniques
to recognize surfaces such as tabletops, ceilings, and floors.
• Healthcare
Computer vision has contributed significantly to the development of health tech. Automating the
search for malignant moles on a person's skin or locating indicators in an X-ray or MRI scan are
just two of the many applications of computer vision algorithms.
Current trends of computer vision
Computer vision in health and safety
A key use case for computer vision is spotting dangers and raising alarms when something
is going wrong. Methods have been developed for allowing computers to detect unsafe
behavior on construction sites – such as workers without hard hats or safety harnesses, as
well as monitor environments where heavy machinery such as forklift trucks are working
in proximity to humans, enabling them to be automatically shut down if someone steps into
their path. Of course, preventing the spread of illness caused by viruses is also an important
use case these days, and here computer vision technologies are increasingly being
deployed to monitor compliance with social distancing requirements, as well as mask-wearing
mandates. Computer vision algorithms have also been developed during the current pandemic to
assist with diagnosing infection from chest X-rays by looking for evidence of infection and
damage in images of the lungs.
Computer vision in retail
Shopping and retail are other aspects of life where we are sure to notice the increasing
prevalence of computer vision technology during 2023. Amazon has pioneered the concept
of cashier-less stores with its Go grocery stores, equipped with cameras that simply
recognize which items customers are taking from the shelves.
As well as relieving humans of the responsibility of scanning purchases, computer vision has a
number of other uses in retail, including inventory management, where cameras are used to check
stock levels on shelves and in warehouses and automatically order replenishment when necessary.
It's also been used to monitor and understand the movement patterns of customers around stores
in order to optimize the positioning of goods and, of course, in security systems to deter shoplifters.
Computer vision in connected and autonomous cars
Computer vision is an integral element of the connected systems in modern cars. Although
our first thoughts might be of the upcoming autonomous vehicles, it has a number of other
uses in the existing range of “connected” cars that are already on the roads and parked in
our garages. Systems have been developed that use cameras to track facial expressions to
look for warning signs that we may be getting tired and risking falling asleep at the wheel.
As this is said to be a factor in up to 25% of fatal and serious road accidents, it’s clear to see
that measures like this could easily save lives. This technology is already in use in
commercial vehicles such as freight trucks, and in 2022 we could see it start to make its
way into personal cars too. Other proposed uses for computer vision in cars that could
make it from drawing board to reality include monitoring whether seatbelts are being
worn and even whether passengers are leaving keys and phones behind as they leave taxis
and ride-sharing vehicles.