COMPUTATIONAL THINKING
BY KUMTONG FAVOUR NAANLE
DEPARTMENT OF HEALTH SCIENCES
FACULTY OF PHARMACY
A computer is an electronic device that accepts data, or raw facts, into the computer (input),
processes that data and sends out information (output). Data is entered into the computer
using input devices, examples of which are the joystick and keyboard, and information is
sent out using output devices, examples of which are printers, monitors and speakers.
Computational is a word that relates to computation. Computation is the act or process of
computing, calculating or reckoning; it can also refer to the result of that process, the
amount computed. Put simply, it is the process of gathering information, making estimates
and arriving at a result. Thinking, for its part, can be defined as the act of pondering: to
wonder, to think deeply, or to consider something carefully and thoroughly. It can also be
defined as the act of communicating with oneself in one's mind, trying to find solutions to
problems and answers to questions.
A first definition, then: computational thinking is the process of computing and the result
obtained from computation, achieved by pondering deeply on how to solve a problem. This
result comes after careful concentration on, and consideration of, ways to improve the
problem at hand. Finding solutions to problems cannot be rushed; problems need to be
thought about deeply and pondered wisely to achieve a better result in the end. Every
problem has a solution, which is why careful, thoughtful reasoning is encouraged.
Secondly, computational thinking is strongly related to computer science (CS), which itself
can be problematic to define satisfactorily. Like CS, CT includes a range of both abstract and
concrete ideas. They both share a universal applicability, and this broadness, while making it
powerful, also makes CT hard to define concisely.
CT is also an idea that’s both new and old. It’s new in the sense that the subject suddenly
became a hotly debated topic in 2006 after Wing’s talk (Wing, 2006). However, many of its
core ideas have already been discussed for several decades, and along the way people have
packaged them up in different ways. For example, as far back as 1980, Seymour Papert of
the Massachusetts Institute of Technology pioneered a technique he called ‘procedural
thinking’ (Papert, 1980). It shared many ideas with what we now think of as CT. Using
procedural thinking, Papert aimed to give students a method for solving problems using
computers as tools. The idea was that students would learn how to create algorithmic
solutions that a computer could then carry out; for this he used the Logo programming
language. Papert's writings have inspired much in CT, although CT has diverged from this
original idea in some respects.
Nevertheless, during the 10 years following Wing's talk, a number of succinct definitions
were attempted. While they hint at similar ideas, there appears to be some diversity in what
these people say. Perhaps Voogt et al. were right, and our best hope of understanding CT is
to build up those overlapping, criss-crossing concepts. As luck would have it, this work was
already done by Cynthia Selby (Selby, 2013), when she scoured the CT literature for
concepts and divided them into two categories: concepts core to CT, and concepts that are
somehow peripheral and so should be excluded from a definition.
Thirdly, computational thinking can be defined as a means of identifying a step-by-step
solution to a complex problem. This definition includes breaking a problem down into
smaller parts, recognizing patterns and removing extraneous details so that the step-by-step
solution can be carried out by humans and also by computers. This pattern of problem
solving is used in our day-to-day activities, not only in computer science but also in history,
language, maths, art and science. Computational thinking often includes a solution that
involves the use of technology, such as a computer, to carry out the step-by-step procedure,
which is also known as an algorithm.
Fourthly, computational thinking is the mental skill of applying concepts, techniques and
methods for problem solving and logical reasoning derived from computing, in order to
solve problems in all areas of life. In the educational field, computational thinking refers to
problem-solving methods that express solutions in ways a computer could execute. It
involves the automation of processes, but also the use of the computer to explore, analyse
and understand processes and procedures.
Fifthly, computational thinking is widely defined and explained as a set of cognitive skills
and ways of solving problems. The ways of solving problems relates to many fields which
include science and engineering. The history of computational thinking as a concept dates
back at least to the 1950s but most ideas go way back. Computational thinking deals with
ideas like extraction, data representation, pattern recognition, abstraction, decomposition,
algorithm, logically organizing data, which are also prevalent in other kinds of thinking or
reasoning, such as scientific thinking, engineering thinking, systems thinking, design thinking
and model-based thinking. Neither the idea nor the term is recent: preceded by the terms
algorithmizing, procedural thinking and computational literacy, used by computing pioneers
like Alan Perlis and Donald Knuth, the term computational thinking itself was first used by
Seymour Papert in 1980 and again in 1996. Computational thinking can be used to devise
algorithmic solutions to complicated problems at scale, and is often used to achieve large
improvements in efficiency.
The phrase computational thinking was brought to the forefront of the computer science
education community in 2006 as a result of a Communications of the ACM essay on the
subject by Jeannette Wing. The essay suggested that thinking computationally was a
fundamental skill for everyone, not just computer scientists, because it also relates to other
fields of study, and argued for the importance of integrating computational ideas into other
school subjects so that they can be well taught and learned by students. The essay also
suggested that by learning computational thinking, children would do better in many
everyday tasks; its examples include finding one's lost mittens and knowing when to stop
renting and buy instead. The continuum of computational thinking questions in education
ranges from K-9 computing for children to professional and continuing education, where the
challenge is how to communicate deep principles, maxims and ways of reasoning between
experts.
For the first ten years computational thinking was a US-centered movement, and still today
that early focus is seen in the field’s research. The field’s most cited articles and most cited
people were active in the early US CT wave, and the field’s most active researcher networks
are US-based. Dominated by US and European researchers, it is unclear to what extent the
field's predominantly Western body of research literature can cater to the needs of students in
other cultural groups.
CHARACTERISTICS
The characteristics also known as the pillars of computational thinking are decomposition,
pattern recognition/data representation, generalisation/abstraction and algorithms. By
decomposing a problem, identifying the variables involved using data representation, and
creating algorithms, a generic solution results. The generic solution is a generalization or
abstraction that can be used to solve a multitude of variations of the initial problem.
Another characterization of computational thinking is the “three A’s” iterative process
based on three stages:
*Abstraction: problem formulation;
*Automation: solution expression;
*Analysis: solution execution and evaluation.
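The three stages above can be walked through on a small, concrete problem. The sketch below is purely illustrative; the phone-plan scenario, the plan names and the fee figures are all invented for the example.

```python
# A hypothetical walk through the "three A's" for one small problem:
# deciding which of several phone plans is cheapest for a given usage.

# Abstraction (problem formulation): reduce each plan to the details
# that matter here, namely a fixed monthly fee plus a per-minute rate.
plans = {
    "basic":   {"monthly_fee": 10.0, "per_minute": 0.05},
    "premium": {"monthly_fee": 25.0, "per_minute": 0.01},
}

# Automation (solution expression): an algorithm a computer can carry out.
def monthly_cost(plan, minutes_used):
    return plan["monthly_fee"] + plan["per_minute"] * minutes_used

def cheapest_plan(plans, minutes_used):
    return min(plans, key=lambda name: monthly_cost(plans[name], minutes_used))

# Analysis (solution execution and evaluation): run it and inspect results.
print(cheapest_plan(plans, 100))   # -> basic (light user)
print(cheapest_plan(plans, 1000))  # -> premium (heavy user)
```

Running and inspecting the results (the analysis stage) might then prompt a revised abstraction, for example adding data allowances, which is what makes the process iterative.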
DECOMPOSITION:
Computational thinking promotes one of these heuristics to a core practice: decomposition,
which is an approach that seeks to break a complex problem down into simpler parts that
are easier to deal with. Its particular importance to CT comes from the experiences of
computer science. Programmers and computer scientists usually deal with large, complex
problems that feature multiple interrelated parts. While some other heuristics prove useful
some of the time, decomposition almost invariably helps in managing a complex problem
where a computerised solution is the goal.
Decomposition is a divide-and-conquer strategy, something seen in numerous places
outside computing:
*Generals employ it on the battlefield when outnumbered by the enemy. By engaging
only part of the enemy forces, they neutralise their opponent's advantage of numbers
and defeat them one group at a time.
*Politicians use it to break opposition up into weaker parties who might otherwise unite
into a stronger whole.
*When faced with a large, diverse audience, marketers segment their potential
customers into different stereotypes and target each one differently.
Within the realm of CT, you use divide and conquer when the problem facing you is too
large or complex to deal with all at once. For example, a problem may contain several
interrelated parts, or a particular process might be made up of numerous steps that need
spelling out. Applying decomposition to a problem requires you to pick all these apart.
By applying decomposition, you aim to end up with a number of sub-problems that can be
understood and solved individually. This may require you to apply the process recursively.
That is to say, the problem is re-formed as a series of smaller problems that, while simpler,
might be still too complex, in which case they too need breaking down, and so on. Visually
this gives the problem definition a tree structure. Decomposition is closely tied to problem
solving, the process of achieving goals by overcoming obstacles, which is a frequent part of
most activities. Problems in need of solutions range from simple personal tasks (e.g. how to
turn on an appliance) to complex issues in technical fields. The former is an example of
simple problem solving (SPS), addressing one issue, whereas the latter is complex problem
solving (CPS), with multiple interrelated obstacles. Another classification is into well-defined
problems, which have specific obstacles and goals, and ill-defined problems, in which the
current situation is troublesome but it is not clear what kind of resolution to aim for.
Similarly, one may distinguish formal or fact-based problems, requiring psychometric
intelligence, from socio-emotional problems, which depend on the changeable emotions of
individuals or groups, as with tactful behaviour, fashion or gift choices.
Solutions require sufficient resources and knowledge to attain the goal. Professionals such as
lawyers, doctors and consultants are largely problem solvers for issues which require
technical skills and knowledge beyond general competence. Many businesses have found
profitable markets by recognizing a problem and creating a solution: the more widespread
and inconvenient the problem, the greater the opportunity to develop a scalable solution.
There are many specialized problem solving techniques and methods in fields such as
engineering, business, medicine, mathematics, computer science, philosophy and social
organization. The mental techniques to identify, analyse, and solve problems are studied in
psychology and the cognitive sciences. Additionally, the mental obstacles preventing people
from finding solutions are a widely researched topic: problem-solving impediments include
confirmation bias, mental set, and functional fixedness.
There are two different types of problems: ill-defined and well-defined; different approaches
are used for each. Well-defined problems have specific end goals and clearly expected
solutions, while ill-defined problems do not. Well-defined problems allow for more initial
planning than ill-defined problems. Solving problems sometimes involves dealing with
pragmatics, the way that context contributes to meaning, and semantics, the interpretation
of the problem. The ability to understand what the end goal of the problem is, and what
rules could be applied represents the key to solving the problem. Sometimes the problem
requires abstract thinking or a creative solution. Decomposition splits a problem into
smaller parts, making it much faster and easier to find a solution.
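The recursive divide-and-conquer pattern described above is exactly how merge sort works: the problem is broken into halves until each sub-problem is trivially solvable, then the sub-solutions are combined. A minimal sketch:

```python
# Merge sort as an example of recursive decomposition: "sort this list"
# is split into smaller sorting problems until each is trivially easy,
# giving the tree structure described in the text.

def merge_sort(items):
    if len(items) <= 1:               # a trivially small sub-problem
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # decompose: solve each half...
    right = merge_sort(items[mid:])
    return merge(left, right)         # ...then combine the solutions

def merge(left, right):
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # append any leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]
```

Each recursive call is a node in the decomposition tree; the leaves are lists of length one, which need no further work.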
Examples of Decomposition in Curriculum
Indeed, decomposition is a powerful tool that guides how we approach projects and tasks
regularly. And it is also something employed in student learning. Here are some examples for
accentuating these in curriculum.
English Language Arts: Students analyze themes in a text by first answering: Who is the
protagonist and antagonist? Where is the setting? What is the conflict? What is the
resolution?
Mathematics: Students find the area of different shapes by decomposing them into
triangles.
Science: Students research the different organs in order to understand how the human body
digests food.
Social Studies: Students explore a different culture by studying the traditions, history, and
norms that comprise it.
Languages: Students learn about sentence structure in a foreign language by breaking it
down into different parts like subject, verb, and object.
Arts: Students work to build the set for a play by reviewing the scenes to determine their
setting and prop needs.
Examples of Decomposition in Computer Science
Then, from a computer science and coding perspective, decomposition can come into play
when students are programming a new game. For example, students need to consider the
characters, setting, and plot, as well as consider how different actions will take place, how it
will be deployed, and so much more.
It’s hopefully clear that decomposition is deeply ingrained in how we function daily and
address problems both big and small. The concept is already familiar to students, but they
need to learn how to recognize the process as it happens and leverage it when they feel
overwhelmed by a problem, task, or project. Decomposition teaches students to
embrace ambiguity and equips them with the confidence to learn new things.
ABSTRACTION
A way of expressing an idea in a specific context while at the same time removing
details that are irrelevant in that context. The essence of abstraction is preserving
information that is relevant in a given context, and forgetting information that is
irrelevant in that context.
(Guttag, 2013)
Abstraction is a key feature of both computer science and computational thinking. Some
have gone so far as to describe computer science as ‘the automation of abstraction’ (Wing,
2014).
The reasoning behind this goes right to the core of what programmers and computer
scientists are trying to do that is, solve real-world problems using computers. They cannot
magically transport the real world into the computer; instead, they have to describe the real
world to the computer. But the real world is messy, filled with lots of noise and endless
details. We cannot describe the world in its entirety. There is too much information and
sometimes we do not even fully understand the way it works. Instead, we create models of
the real world and then reason about the problem via these models. Once we achieve a
sufficient level of understanding, we teach the computer how to use these models (i.e.
program it).
Perhaps if we were hyper-intelligent super beings with limitless mental capacity, we would
not need abstractions, and we could take into account every minute detail of our solutions. I
hate to be the one to break it to you, but we are not super beings; we are an intelligent species
of ape with the ability to keep about seven or so pieces of information in working memory at
any one time.
Consequently, we are stuck struggling to understand the world via our own models of it.
Examples of abstractions
In the previous chapter’s drawing example, we created a shape as an abstraction. By itself,
the idea of a shape tells you some things – such as that we’re dealing with a form that has
an external boundary or surface – but other things, like the number of its sides and its
internal angles, are unknown details.
A shape might seem an elementary and academic example. In fact, we are surrounded by
abstractions in everyday life. Many familiar things are abstract representations of a more
detailed reality. If you ask a computer scientist what’s so great about abstractions, they may
well give you a more technical example: say, an email. Most of the world knows what an
email is: a message written on a computer and transmitted via a network to another user.
However, this is an abstraction in the same way that ‘letter’ is an abstraction for a piece of
paper with intelligible ink markings on it. There’s a lot more underlying detail to an email. If
you imagine an email, you probably think of the text in your mail client; but one could
equally say ‘an email application makes readable the contents of computer memory’, or
‘computer memory stores as digital information the signals from the network’.
That example of an email hints at an important fact about abstractions: context is key. An
abstraction of something operates at a certain level of detail and puts a layer over the top
to obscure some of the information. You can take any abstraction, peel back the layer and
go further ‘down’ to reveal more information (thus considering it in a more detailed
context), or go further ‘up’, adding more layers to suppress further details about it (a less
detailed context).
Let’s look at a couple of real-world examples to illustrate this. Imagine a vehicle rental company
builds a system to manage their inventory. Using the techniques from Chapter 3 (for
example, identifying entities, the processes they carry out and the properties they possess),
the system designers break down the task and pick out the various concepts. The company
has several types of vehicles (cars, vans, motorcycles, etc.) and numerous operations are
carried out daily (booking, checking in, checking out, cleaning, road testing, etc.). Once the
company identifies all these concepts, it’s up to them how to define their abstractions. For
example:
*If the company offers a no-frills ‘any-old-car-will-do’ service, then all the individual
cars could be grouped into one abstraction (called, imaginatively, a car).
*Alternatively, if they want to give customers more choice, they could group the cars
according to transmission, giving them ‘manual cars’ and ‘automatic cars’.
*All the vans could be classified according to capacity, yielding the abstractions ‘small
van’ or ‘large van’.
*When it comes to obtaining a complete listing of everything available at a specific
site, no discrimination among all the various types is necessary. In this case, everything
– cars, vans, motorcycles – can be dealt with using the single abstraction ‘vehicle’.
You can see that each level of abstraction is built up recursively from the levels below. All
the Fords and Audis and Nissans, and so on, join to become cars. All the cars join the vans
and motorcycles to become vehicles. All the vehicles, spare parts, and tools can be treated
collectively as inventory items. Looking at it this way, the vehicle rental company takes
numerous concrete ideas and unifies them into more abstract concepts. However, building
abstractions can sometimes go the other way: that is, start with a fairly abstract concept
and add more detail as you go along.
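One possible way to sketch the rental company's abstraction levels in code is a small class hierarchy, where concrete makes are instances and ‘Car’, ‘Van’ and ‘Vehicle’ are the abstractions built up from them. The class and attribute names are assumptions for illustration, not the company's actual design.

```python
# A sketch of the rental company's abstraction levels as classes.
# "Vehicle" is the most abstract level; "Car" and "Van" sit below it.

class Vehicle:
    def __init__(self, make):
        self.make = make

class Car(Vehicle):
    # cars can be grouped further by transmission if more choice is wanted
    def __init__(self, make, transmission):
        super().__init__(make)
        self.transmission = transmission

class Van(Vehicle):
    # vans are classified by capacity ("small" or "large")
    def __init__(self, make, capacity):
        super().__init__(make)
        self.capacity = capacity

inventory = [
    Car("Ford", "manual"),
    Car("Audi", "automatic"),
    Van("Nissan", "large"),
]

# A complete site listing needs no discrimination between types:
# everything is simply treated as a Vehicle.
print([v.make for v in inventory])  # ['Ford', 'Audi', 'Nissan']
```

Note how the listing at the bottom works purely at the ‘vehicle’ level, exactly as the final bullet point above describes.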
Let’s take a different example to illustrate this. An online video service like YouTube or
Netflix can recommend videos to a user because it has built an abstract model of them. An
ideal model of the user would include a perfect replica of their neurons and brain chemistry,
so it would know with uncanny accuracy which videos they would like to watch at any
moment. Or, failing that, maybe a complete and thorough biography including a run-down
of every book, film and TV show they’ve ever consumed.
However, online video services operate with far less information than that. They know
almost nothing about the viewer aside from their video-watching habits (O’Reilly, 2016).
This means the video service has to reduce the customer to an abstract video-watching
entity. The service knows what kinds of videos the viewer watched in the past and so might
push a list of other unseen videos based on that information. It assumes the viewer is most
interested in other videos of the same genre.
At the time of writing, online video platforms are in a formative period and such models are
constantly changing and rapidly becoming more sophisticated. But, at the moment,
however powerful their approaches are, they’re still operating by building an abstract
model of you, the viewer, based on a tiny subset of information.
Producing better recommendations might mean introducing more detail into their models
and taking into account more about the viewer. That would mean having to operate at a
lower level of abstraction, which opens up considerations of what is important or
meaningful at that level. A service may decide to divide users by geographical region, so
that there isn’t just a single concept of a viewer; rather, there is an American viewer, a
British viewer, a German viewer and so on. Now, the procedure for building a list of
recommendations can be extended with additional steps matching a viewer’s region to
certain types of films. A German viewer’s recommendations would widen to also include
some German-language films. But the German-language films wouldn’t be included in
recommendations outside German-speaking regions.
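A hypothetical sketch of that region-aware step might look like the following. The film titles, the catalogue structure and the ‘region implies extra language’ rule are all invented for illustration; real services use far more elaborate models.

```python
# Hypothetical region-aware recommendation filtering. Everyone sees
# English-language films in their preferred genre; viewers in
# German-speaking regions also see German-language films.

catalogue = [
    {"title": "Film A", "genre": "drama", "language": "English"},
    {"title": "Film B", "genre": "drama", "language": "German"},
]

def recommend(genre, viewer_region):
    languages = {"English"}
    if viewer_region in ("DE", "AT", "CH"):   # German-speaking regions
        languages.add("German")
    return [f["title"] for f in catalogue
            if f["genre"] == genre and f["language"] in languages]

print(recommend("drama", "US"))  # ['Film A']
print(recommend("drama", "DE"))  # ['Film A', 'Film B']
```

The point is that refining the viewer abstraction (from ‘a viewer’ to ‘a German viewer’) adds one extra step to the recommendation procedure without changing its overall shape.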
Exercising caution
Abstractions undoubtedly help, but they should also come with a warning label. Perhaps
‘Caution: Abstractions may distract from reality.’
The reason you should be cautious is that abstractions are a way of avoiding details, either
by suppressing or deferring them. As in life, when you do that carefully and with good
foresight, it can be useful and productive. Doing it recklessly may very well come back to
bite you later because it can be easy to get lost in the abstractions. A fuzzy marketing
message might imply that a product solves more problems than it really does; a
philosophical argument that tries to account for everything might be so lacking in detail that
it explains nothing.
In the end, the real question is: how does it work? You can’t execute an abstraction. The
acid test for a solution depends on how well the concrete implementation works.
Putting abstractions to use
Abstractions are fine for doing things like organising your solution, managing its complexity
and reasoning about its behaviour in general terms. But when you actually put the solution
into use, it has to work with some level of detail. Put another way: you can’t tell a computer
to draw a shape, but you can tell it to draw a circle.
To put an abstraction to use, it must be made concrete somehow. In other words, it is
instantiated, and this requires attention to detail. For example, the car rental company may
need to list all vehicles according to how soon they need servicing. A generalised procedure
for doing this needs some concrete detail to work with: specifically, the date of the next
service must be specifiable for each vehicle. ‘Next service date’ is therefore a
property of every vehicle. Another procedure may involve showing all vans above a
specified payload. This procedure only makes sense for vans, so only they need to provide
this information when instantiated (see Figure 4.5).
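Instantiation can be sketched in code along the following lines. The registration strings, dates and payload figures are invented; the point is only that every vehicle supplies a concrete next-service date, while payload exists at the van level alone.

```python
from datetime import date

# Instantiating the abstractions: every Vehicle carries a concrete
# next-service date; only a Van carries a payload figure.

class Vehicle:
    def __init__(self, reg, next_service):
        self.reg = reg
        self.next_service = next_service

class Van(Vehicle):
    def __init__(self, reg, next_service, payload_kg):
        super().__init__(reg, next_service)
        self.payload_kg = payload_kg

fleet = [
    Vehicle("CAR-1", date(2025, 6, 1)),
    Van("VAN-1", date(2025, 3, 15), payload_kg=1200),
    Van("VAN-2", date(2025, 9, 1), payload_kg=800),
]

# Generalised procedure: list every vehicle by how soon it needs servicing.
by_service = sorted(fleet, key=lambda v: v.next_service)

# Van-only procedure: show all vans above a specified payload.
heavy_vans = [v for v in fleet if isinstance(v, Van) and v.payload_kg > 1000]

print([v.reg for v in by_service])  # ['VAN-1', 'CAR-1', 'VAN-2']
print([v.reg for v in heavy_vans])  # ['VAN-1']
```

The first procedure works at the ‘vehicle’ level of abstraction; the second must drop down a level, because payload is meaningless for cars.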
Leaking details
Even after sorting out the underlying details, a risk persists with the use of abstractions: you
may have unwittingly ignored a detail that affects the way the abstraction behaves.
Sticking with our automotive theme, the way you operate a car can be considered as an
abstraction. A car is a very complicated piece of machinery, but operating it is a simple
matter because all those details are hidden behind a few relatively simple control
mechanisms: steering wheel, pedals and gear lever. But occasionally, details hidden behind
this clean interface affect how the driver uses the car. Revving the accelerator pedal too
high can cause wear on the engine. Braking distance will increase over time as the brake
pads become worn. The engine may not start if the weather is too cold. When things like
this happen, hidden details have become important to how something works; those details
have ‘leaked’ out of the abstraction. In actuality, no complex abstraction is perfect and all of
them will likely leak at some point. But there’s no reason to treat this as bad news. The
same has been happening in science for centuries.
You may have learned at school (although it might have been phrased differently) that all of
science is built on abstractions. Scientists cannot take every single aspect of something into
consideration when studying it, so they build simplified models of it: models that ignore
friction and air resistance, or models that assume no losses to heat transfer. Technically,
such models don’t fully reflect reality, but that’s not the most important thing. What’s
important is that they are shown to work.
I know of no general advice to anticipate leaky abstractions ahead of time. However, when
yours are shown to leak, you should do as the scientist does: amend your model to take the
problem into account. Details that turn out to be important need to be brought explicitly
into the model.
PATTERN RECOGNITION
Pattern recognition is the automated recognition of patterns and regularities in data. It has
applications in statistical data analysis, signal processing, image analysis, information
retrieval, bioinformatics, data compression, computer graphics and machine learning.
Pattern recognition has its origins in statistics and engineering; some modern approaches to
pattern recognition include the use of machine learning, due to the increased availability
of big data and a new abundance of processing power. These activities can be viewed as two
facets of the same field of application, and they have undergone substantial development
over the past few decades.
The primary goal of pattern recognition is supervised or unsupervised classification. Among
the various frameworks in which pattern recognition has been traditionally formulated, the
statistical approach has been most intensively studied and used in practice. More recently,
neural network techniques and methods imported from statistical learning theory have been
receiving increasing attention. The design of a recognition system requires careful attention
to the following issues: definition of pattern classes, sensing environment, pattern
representation, feature extraction and selection, cluster analysis, classifier design and
learning, selection of training and test samples, and performance evaluation. In spite of
almost 50 years of research and development in this field, the general problem of
recognizing complex patterns with arbitrary orientation, location, and scale remains
unsolved. New and emerging applications, such as data mining, web searching, retrieval of
multimedia data, face recognition, and cursive handwriting recognition, require robust and
efficient pattern recognition techniques.
Pattern recognition systems are commonly trained from labeled "training" data. When
no labeled data are available, other algorithms can be used to discover previously unknown
patterns. KDD and data mining have a larger focus on unsupervised methods and stronger
connection to business use. Pattern recognition focuses more on the signal and also takes
acquisition and signal processing into consideration. It originated in engineering, and the
term is popular in the context of computer vision: a leading computer vision conference is
named Conference on Computer Vision and Pattern Recognition.
In machine learning, pattern recognition is the assignment of a label to a given input value.
In statistics, discriminant analysis was introduced for this same purpose in 1936. An example
of pattern recognition is classification, which attempts to assign each input value to one of a
given set of classes (for example, determine whether a given email is "spam"). Pattern
recognition is a more general problem that encompasses other types of output as well.
Other examples are regression, which assigns a real-valued output to each input; sequence
labeling, which assigns a class to each member of a sequence of values (for example, part-of-speech
tagging, which assigns a part of speech to each word in an input sentence);
and parsing, which assigns a parse tree to an input sentence, describing the syntactic
structure of the sentence.
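Classification as label assignment can be made concrete with a deliberately toy example. The keyword rule below is an invented stand-in for a real trained model; it only illustrates the input-to-label mapping.

```python
# A toy illustration of classification: each input (an email's text)
# is assigned one label from a fixed set of classes. The keyword rule
# is a naive stand-in for a real statistical model.

SPAM_WORDS = {"winner", "free", "prize"}

def classify_email(text):
    words = set(text.lower().split())
    return "spam" if words & SPAM_WORDS else "not spam"

print(classify_email("You are a winner claim your free prize"))  # spam
print(classify_email("Meeting moved to 3pm tomorrow"))           # not spam
```

Regression and sequence labeling have the same overall shape; only the type of output changes (a real number, or one label per sequence element).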
Pattern recognition algorithms generally aim to provide a reasonable answer for all possible
inputs and to perform "most likely" matching of the inputs, taking into account their
statistical variation. This is opposed to pattern matching algorithms, which look for exact
matches in the input with pre-existing patterns. A common example of a pattern-matching
algorithm is regular expression matching, which looks for patterns of a given sort in textual
data and is included in the search capabilities of many text editors and word processors.
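The pattern-matching side of this contrast is easy to show directly with a regular expression, which finds exact, pre-specified patterns rather than statistically likely ones. The date format and sample text are invented for the example.

```python
import re

# Pattern *matching*: a regular expression finds exact occurrences of a
# pre-specified pattern, here simple dates of the form DD/MM/YYYY.
date_pattern = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")

text = "Invoices dated 12/03/2021 and 07/11/2020 are overdue."
print(date_pattern.findall(text))  # ['12/03/2021', '07/11/2020']
```

A pattern *recognition* system, by contrast, would be expected to cope with noisy or varied inputs (e.g. handwritten dates) and return the most likely interpretation rather than an exact match.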
Pattern recognition is generally categorized according to the type of learning procedure used
to generate the output value. Supervised learning assumes that a set of training data
(the training set) has been provided, consisting of a set of instances that have been properly
labeled by hand with the correct output. A learning procedure then generates a model that
attempts to meet two sometimes conflicting objectives: Perform as well as possible on the
training data, and generalize as well as possible to new data (usually, this means being as
simple as possible, for some technical definition of "simple", in accordance with Occam's
Razor, discussed below). Unsupervised learning, on the other hand, assumes training data
that has not been hand-labeled, and attempts to find inherent patterns in the data that can
then be used to determine the correct output value for new data instances. A combination
of the two that has been explored is semi-supervised learning, which uses a combination of
labeled and unlabeled data (typically a small set of labeled data combined with a large
amount of unlabeled data). In cases of unsupervised learning, there may be no training data
at all.
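The supervised/unsupervised contrast can be sketched with two tiny procedures. The points, labels and threshold below are invented; the nearest-neighbour rule and the crude distance-threshold grouping are only minimal stand-ins for real learning algorithms.

```python
import math

# Supervised: training data comes with hand-labelled outputs, and a new
# point is labelled from them (here, by its nearest labelled neighbour).
training = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"), ((8.0, 8.0), "B")]

def nearest_label(point):
    return min(training, key=lambda ex: math.dist(ex[0], point))[1]

# Unsupervised: no labels at all, so structure must be inferred. This
# crude rule groups points lying within a threshold of a cluster's seed.
def cluster(points, threshold=2.0):
    clusters = []
    for p in points:
        for c in clusters:
            if math.dist(p, c[0]) < threshold:
                c.append(p)
                break
        else:
            clusters.append([p])   # no existing cluster fits: start one
    return clusters

print(nearest_label((1.1, 1.0)))                 # A
print(len(cluster([(0, 0), (0.5, 0), (9, 9)])))  # 2
```

Note that the supervised procedure needs the labels "A" and "B" up front, whereas the unsupervised one discovers the two groups without ever being told they exist.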
Sometimes different terms are used to describe the corresponding supervised and
unsupervised learning procedures for the same type of output. The unsupervised equivalent
of classification is normally known as clustering, based on the common perception of the
task as involving no training data to speak of, and of grouping the input data into clusters
based on some inherent similarity measure (e.g. the distance between instances, considered
as vectors in a multi-dimensional vector space), rather than assigning each input instance
into one of a set of pre-defined classes. In some fields, the terminology is different.
In community ecology, the term classification is used to refer to what is commonly known as
"clustering".
The piece of input data for which an output value is generated is formally termed
an instance. The instance is formally described by a vector of features, which together
constitute a description of all known characteristics of the instance. These feature vectors
can be seen as defining points in an appropriate multidimensional space, and methods for
manipulating vectors in vector spaces can be correspondingly applied to them, such as
computing the dot product or the angle between two vectors. Features typically are
either categorical (also known as nominal, i.e., consisting of one of a set of unordered items,
such as a gender of "male" or "female", or a blood type of "A", "B", "AB" or
"O"), ordinal (consisting of one of a set of ordered items, e.g., "large", "medium" or
"small"), integer-valued (e.g., a count of the number of occurrences of a particular word in
an email) or real-valued (e.g., a measurement of blood pressure). Often, categorical and
ordinal data are grouped together, and this is also the case for integer-valued and real-valued data. Many algorithms work only in terms of categorical data and require that real-valued or integer-valued data be discretized into groups (e.g., less than 5, between 5 and 10,
or greater than 10).
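The vector-space view of instances, and the discretization just described, can be sketched in C (helper names are illustrative; the cut-points 5 and 10 are taken from the example above):

```c
/* Dot product of two feature vectors of length n, one of the basic
 * vector-space operations mentioned in the text. */
double dot(const double a[], const double b[], int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* Discretize a real-valued feature into the three groups from the
 * text: 0 = less than 5, 1 = between 5 and 10, 2 = greater than 10. */
int discretize(double v)
{
    if (v < 5.0)   return 0;
    if (v <= 10.0) return 1;
    return 2;
}
```

After discretization, an algorithm that only handles categorical data can treat the group index as just another nominal feature.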
Probabilistic classifiers
Many common pattern recognition algorithms are probabilistic in nature, in that they
use statistical inference to find the best label for a given instance. Unlike other algorithms,
which simply output a "best" label, often probabilistic algorithms also output a probability of
the instance being described by the given label. In addition, many probabilistic algorithms
output a list of the N-best labels with associated probabilities, for some value of N, instead
of simply a single best label. When the number of possible labels is fairly small (e.g., in the
case of classification), N may be set so that the probability of all possible labels is output.
Probabilistic algorithms have many advantages over non-probabilistic algorithms:
 They output a confidence value associated with their choice. (Note that some other algorithms may also output confidence values, but in general, only for probabilistic algorithms is this value mathematically grounded in probability theory. Non-probabilistic confidence values can in general not be given any specific meaning, and can only be used to compare against other confidence values output by the same algorithm.)
 Correspondingly, they can abstain when the confidence of choosing any particular output is too low.
 Because of the probabilities output, probabilistic pattern-recognition algorithms can be more effectively incorporated into larger machine-learning tasks, in a way that partially or completely avoids the problem of error propagation.
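A hedged sketch of how probability outputs enable abstention: given a per-class probability vector (from any probabilistic classifier), pick the most probable label, but abstain when even the best label falls below a confidence threshold. The function name and the -1 abstain convention are illustrative:

```c
/* Return the index of the most probable class, or -1 (abstain) when
 * even the best class has probability below `threshold`.
 * p[] holds k class probabilities that sum to 1. */
int classify_or_abstain(const double p[], int k, double threshold)
{
    int best = 0;
    for (int i = 1; i < k; i++)
        if (p[i] > p[best])
            best = i;
    return (p[best] >= threshold) ? best : -1;
}
```

With probabilities {0.1, 0.7, 0.2} and a threshold of 0.5 the function returns class 1; raising the threshold to 0.8 makes it abstain, so a downstream system can route the instance to a human instead of propagating a likely error.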
Number of important feature variables
Feature selection algorithms attempt to directly prune out redundant or irrelevant features.
A general introduction to feature selection, which summarizes approaches and challenges, has been given.[6] The complexity of feature selection is, because of its non-monotonic character, an optimization problem where, given a total of n features, the powerset consisting of all 2^n − 1 subsets of features needs to be explored. The Branch-and-Bound algorithm[7] does reduce this complexity, but is intractable for medium to large values of the number of available features n.
Techniques to transform the raw feature vectors (feature extraction) are sometimes used
prior to application of the pattern-matching algorithm. Feature extraction algorithms
attempt to reduce a large-dimensionality feature vector into a smaller-dimensionality vector
that is easier to work with and encodes less redundancy, using mathematical techniques
such as principal components analysis (PCA). The distinction between feature
selection and feature extraction is that the resulting features after feature extraction has
taken place are of a different sort than the original features and may not easily be
interpretable, while the features left after feature selection are simply a subset of the
original features.
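As a hedged sketch of the selection side (extraction techniques such as PCA are more involved), the function below keeps only the features whose variance across the training set exceeds a threshold, a simple way to prune near-constant, uninformative features. All names and the row-major layout are assumptions for this illustration:

```c
/* Mark which of n_features columns to keep: a feature is kept when its
 * variance across the n_samples rows of data exceeds `threshold`.
 * data is row-major: data[s * n_features + f]. Returns the kept count. */
int select_by_variance(const double *data, int n_samples, int n_features,
                       double threshold, int keep[])
{
    int kept = 0;
    for (int f = 0; f < n_features; f++) {
        double mean = 0.0, var = 0.0;
        for (int s = 0; s < n_samples; s++)
            mean += data[s * n_features + f];
        mean /= n_samples;
        for (int s = 0; s < n_samples; s++) {
            double d = data[s * n_features + f] - mean;
            var += d * d;
        }
        var /= n_samples;
        keep[f] = (var > threshold);
        kept += keep[f];
    }
    return kept;
}
```

Note the selected features are simply a subset of the originals and stay interpretable, which is exactly the distinction from feature extraction drawn above.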
The first pattern classifier – the linear discriminant presented by Fisher – was developed in
the frequentist tradition. The frequentist approach entails that the model parameters are
considered unknown, but objective. The parameters are then computed (estimated) from
the collected data. For the linear discriminant, these parameters are precisely the mean
vectors and the covariance matrix. Also the prior probability of each class is estimated from
the collected dataset. Note that the usage of 'Bayes rule' in a pattern classifier does not
make the classification approach Bayesian.
Bayesian statistics has its origin in Greek philosophy where a distinction was already made
between the 'a priori' and the 'a posteriori' knowledge. Later Kant defined his distinction
between what is a priori known – before observation – and the empirical knowledge gained
from observations. In a Bayesian pattern classifier, the class prior probabilities can be chosen by the user, and are then a priori. Moreover, experience quantified as a priori parameter
values can be weighted with empirical observations – using e.g., the Beta- (conjugate prior)
and Dirichlet-distributions. The Bayesian approach facilitates a seamless intermixing
between expert knowledge in the form of subjective probabilities, and objective
observations.
Probabilistic pattern classifiers can be used according to a frequentist or a Bayesian
approach.
Within medical science, pattern recognition is the basis for computer-aided diagnosis (CAD)
systems. CAD describes a procedure that supports the doctor's interpretations and findings.
Other typical applications of pattern recognition techniques are automatic speech
recognition, speaker identification, classification of text into several categories (e.g., spam or
non-spam email messages), the automatic recognition of handwriting on postal envelopes,
automatic recognition of images of human faces, or handwriting image extraction from
medical forms.[9][10] The last two examples form the subtopic image analysis of pattern
recognition that deals with digital images as input to pattern recognition systems. [11][12]
Optical character recognition is an example of the application of a pattern classifier. The method of signing one's name was captured with stylus and overlay starting in 1990. The strokes, speed, relative minima, relative maxima, acceleration and pressure are used to uniquely identify and confirm identity. Banks were first offered this technology, but were content to collect from the FDIC for any bank fraud and did not want to inconvenience customers.
Pattern recognition has many real-world applications in image processing. Some examples
include:
 identification and authentication: e.g., license plate recognition,[13] fingerprint analysis, face detection/verification,[14] and voice-based authentication.[15]
 medical diagnosis: e.g., screening for cervical cancer (Papnet),[16] breast tumors or heart sounds;
 defense: various navigation and guidance systems, target recognition systems, shape recognition technology, etc.
 mobility: advanced driver assistance systems, autonomous vehicle technology, etc.[17][18][19][20][21]
In psychology, pattern recognition is used to make sense of and identify objects, and is
closely related to perception. This explains how the sensory inputs humans receive are made
meaningful. Pattern recognition can be thought of in two different ways. The first concerns
template matching and the second concerns feature detection. A template is a pattern used
to produce items of the same proportions. The template-matching hypothesis suggests that
incoming stimuli are compared with templates in the long-term memory. If there is a match,
the stimulus is identified. Feature detection models, such as the Pandemonium system for
classifying letters (Selfridge, 1959), suggest that the stimuli are broken down into their
component parts for identification. For example, a capital E can be broken down into three horizontal lines and one vertical line.
ALGORITHM
Algorithmic thinking is a derivative of computer science and the process of developing code and program applications. This approach automates the problem-solving process by creating a series of systematic, logical steps that take in a defined set of inputs and produce a defined set of outputs based on them.
In other words, algorithmic thinking is not solving for a specific answer; instead, it solves
how to build a sequential, complete, and replicable process that has an end point – an
algorithm. Designing an algorithm helps students to both communicate and interpret clear
instructions for a predictable, reliable output.
It can be understood by taking the example of cooking a new recipe. To cook a new recipe,
one reads the instructions and steps and executes them one by one, in the given sequence.
The result thus obtained is that the new dish is cooked perfectly. Every time you use your
phone, computer, laptop, or calculator you are using Algorithms. Similarly, algorithms help
to do a task in programming to get the expected output.
Algorithms are designed to be language-independent, i.e. they are just plain instructions that can be implemented in any language, and yet the output will be the same, as expected.
Algorithms for pattern recognition depend on the type of label output, on whether learning
is supervised or unsupervised, and on whether the algorithm is statistical or non-statistical in
nature. Statistical algorithms can further be categorized as generative or discriminative.
Classification methods (methods predicting categorical labels)
Parametric:[23]
 Linear discriminant analysis
 Quadratic discriminant analysis
 Maximum entropy classifier (aka logistic regression, multinomial logistic regression): Note that logistic regression is an algorithm for classification, despite its name. (The name comes from the fact that logistic regression uses an extension of a linear regression model to model the probability of an input being in a particular class.)
Nonparametric:[24]
 Decision trees, decision lists
 Kernel estimation and K-nearest-neighbor algorithms
 Naive Bayes classifier
 Neural networks (multi-layer perceptrons)
 Perceptrons
 Support vector machines
 Gene expression programming
Clustering methods (methods for classifying and predicting categorical labels)
 Categorical mixture models
 Hierarchical clustering (agglomerative or divisive)
 K-means clustering
 Correlation clustering
 Kernel principal component analysis (Kernel PCA)
Ensemble learning algorithms (supervised meta-algorithms for combining multiple learning algorithms together)
 Boosting (meta-algorithm)
 Bootstrap aggregating ("bagging")
 Ensemble averaging
 Mixture of experts, hierarchical mixture of experts
General methods for predicting arbitrarily-structured (sets of) labels
 Bayesian networks
 Markov random fields
Multilinear subspace learning algorithms (predicting labels of multidimensional data using tensor representations)
Unsupervised:
 Multilinear principal component analysis (MPCA)
Real-valued sequence labeling methods (predicting sequences of real-valued labels)
 Kalman filters
 Particle filters
Regression methods (predicting real-valued labels)
 Gaussian process regression (kriging)
 Linear regression and extensions
 Independent component analysis (ICA)
 Principal components analysis (PCA)
Sequence labeling methods (predicting sequences of categorical labels)
 Conditional random fields (CRFs)
 Hidden Markov models (HMMs)
 Maximum entropy Markov models (MEMMs)
 Recurrent neural networks (RNNs)
 Dynamic time warping (DTW)
Just as one would not follow arbitrary written instructions to cook a recipe, but only a standard one, not all written instructions for programming are algorithms. For a set of instructions to be an algorithm, it must have the following characteristics:
 Clear and Unambiguous: The algorithm should be clear and unambiguous. Each of its steps should be clear in all aspects and must lead to only one meaning.
 Well-Defined Inputs: If an algorithm says to take inputs, it should be well-defined
inputs. It may or may not take input.
 Well-Defined Outputs: The algorithm must clearly define what output will be yielded
and it should be well-defined as well. It should produce at least 1 output.
 Finite-ness: The algorithm must be finite, i.e. it should terminate after a finite time.
 Feasible: The algorithm must be simple, generic, and practical, such that it can be executed with the available resources. It must not depend on some future technology.
 Language Independent: The Algorithm designed must be language-independent, i.e. it
must be just plain instructions that can be implemented in any language, and yet the
output will be the same, as expected.
 Input: An algorithm has zero or more inputs. Each instruction that contains a fundamental operator must accept zero or more inputs.
 Output: An algorithm produces at least one output. Each instruction that contains a fundamental operator must produce one or more outputs.
 Definiteness: All instructions in an algorithm must be unambiguous, precise, and easy
to interpret. By referring to any of the instructions in an algorithm one can clearly
understand what is to be done. Every fundamental operator in instruction must be
defined without any ambiguity.
 Finiteness: An algorithm must terminate after a finite number of steps in all test cases.
Every instruction which contains a fundamental operator must be terminated within a
finite amount of time. Infinite loops or recursive functions without base conditions do
not possess finiteness.
 Effectiveness: An algorithm must be developed by using very basic, simple, and
feasible operations so that one can trace it out by using just paper and pencil.
Properties of Algorithm:
 It should terminate after a finite time.
 It should produce at least one output.
 It should take zero or more input.
 It should be deterministic, meaning it gives the same output for the same input.
 Every step in the algorithm must be effective i.e. every step should do some work.
Types of Algorithms:
There are several types of algorithms available. Some important algorithms are:
1. Brute Force Algorithm: It is the simplest approach to a problem. A brute force algorithm is the first approach that comes to mind when we see a problem.
2. Recursive Algorithm: A recursive algorithm is based on recursion. In this case, a problem
is broken into several sub-parts and called the same function again and again.
3. Backtracking Algorithm: The backtracking algorithm basically builds the solution by searching among all possible solutions. Using this algorithm, we keep on building the solution according to given criteria. Whenever a solution fails, we trace back to the failure point, build the next candidate solution, and continue this process till we find the solution or all possible solutions have been explored.
4. Searching Algorithm: Searching algorithms are the ones that are used for searching
elements or groups of elements from a particular data structure. They can be of different
types based on their approach or the data structure in which the element should be found.
5. Sorting Algorithm: Sorting is arranging a group of data in a particular manner according
to the requirement. The algorithms which help in performing this function are called
sorting algorithms. Generally sorting algorithms are used to sort groups of data in an
increasing or decreasing manner.
6. Hashing Algorithm: Hashing algorithms work similarly to searching algorithms, but they use an index with a key ID. In hashing, a key is assigned to specific data.
7. Divide and Conquer Algorithm: This algorithm breaks a problem into sub-problems,
solves a single sub-problem and merges the solutions together to get the final solution. It
consists of the following three steps:
 Divide
 Solve
 Combine
8. Greedy Algorithm: In this type of algorithm the solution is built part by part. The
solution of the next part is built based on the immediate benefit of the next part. The one
solution giving the most benefit will be chosen as the solution for the next part.
9. Dynamic Programming Algorithm: This algorithm uses the concept of using the already
found solution to avoid repetitive calculation of the same part of the problem. It divides
the problem into smaller overlapping subproblems and solves them.
10. Randomized Algorithm: In a randomized algorithm, we use a random number while making a decision. The random number helps in deciding the expected outcome.
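Several of the categories above overlap in practice. As a hedged sketch, recursive binary search is at once a recursive, a searching, and a divide-and-conquer algorithm: it halves the problem, solves one sub-part, and needs no combine step:

```c
/* Recursive binary search over a sorted array.
 * Returns the index of key within a[lo..hi], or -1 if it is absent. */
int binary_search(const int a[], int lo, int hi, int key)
{
    if (lo > hi)
        return -1;                 /* base case: empty range */
    int mid = lo + (hi - lo) / 2;
    if (a[mid] == key)
        return mid;
    if (key < a[mid])              /* divide: discard the right half */
        return binary_search(a, lo, mid - 1, key);
    return binary_search(a, mid + 1, hi, key);  /* or the left half */
}
```

For example, searching for 7 in {1, 3, 5, 7, 9} compares against 5, recurses into the right half, and finds 7 at index 3 after only two comparisons. Note the base condition `lo > hi` is what gives the recursion the finiteness property listed earlier.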
Advantages of Algorithms:
 It is easy to understand.
 An algorithm is a step-wise representation of a solution to a given problem.
 In an algorithm the problem is broken down into smaller pieces or steps; hence, it is easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
 Writing an algorithm can be time-consuming.
 Understanding complex logic through algorithms can be very difficult.
 Branching and looping statements are difficult to show in algorithms.
How to Design an Algorithm?
In order to write an algorithm, the following things are needed as a pre-requisite:
1. The problem that is to be solved by this algorithm i.e. clear problem definition.
2. The constraints of the problem must be considered while solving the problem.
3. The input to be taken to solve the problem.
4. The output to be expected when the problem is solved.
5. The solution to this problem, within the given constraints.
Then the algorithm is written with the help of the above parameters such that it solves the
problem.
Example: Consider the example to add three numbers and print the sum.
Step 1: Fulfilling the pre-requisites
As discussed above, in order to write an algorithm, its pre-requisites must
be fulfilled.
1. The problem that is to be solved by this algorithm: Add 3 numbers and print their
sum.
2. The constraints of the problem that must be considered while solving the
problem: The numbers must contain only digits and no other characters.
3. The input to be taken to solve the problem: The three numbers to be added.
4. The output to be expected when the problem is solved: The sum of the three
numbers taken as the input i.e. a single integer value.
5. The solution to this problem, in the given constraints: The solution consists of adding the 3 numbers. It can be done with the help of the ‘+’ operator, bit-wise operations, or any other method.
Step 2: Designing the algorithm
Now let’s design the algorithm with the help of the above pre-requisites:
Algorithm to add 3 numbers and print their sum:
1. START
2. Declare 3 integer variables num1, num2 and num3.
3. Take the three numbers, to be added, as inputs in variables num1, num2, and num3
respectively.
4. Declare an integer variable sum to store the resultant sum of the 3 numbers.
5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of the variable sum
7. END
Step 3: Testing the algorithm by implementing it.
In order to test the algorithm, let’s implement it in C language.
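Following the seven steps above, a minimal C implementation might look like this (variable names mirror the algorithm; a complete program would read num1, num2 and num3 with scanf before adding them):

```c
#include <stdio.h>

/* Steps 2-5: take the three numbers, add them, store the result in sum. */
int sum_of_three(int num1, int num2, int num3)
{
    int sum = num1 + num2 + num3;
    return sum;
}

/* Step 6: print the value of the variable sum. */
void print_sum(int num1, int num2, int num3)
{
    printf("%d\n", sum_of_three(num1, num2, num3));
}
```

Calling `sum_of_three(1, 2, 3)` returns 6, exactly what the pseudocode above specifies for those inputs.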
How to analyze an Algorithm?
For a standard algorithm to be good, it must be efficient. Hence the efficiency of an
algorithm must be checked and maintained. It can be in two stages:
1. Priori Analysis: “Priori” means “before”. Hence Priori analysis means checking the
algorithm before its implementation. In this, the algorithm is checked when it is written
in the form of theoretical steps. The efficiency of the algorithm is measured by
assuming that all other factors, for example, processor speed, are constant and have
no effect on the implementation. This is done usually by the algorithm designer. This
analysis is independent of the type of hardware and language of the compiler. It gives
the approximate answers for the complexity of the program.
2. Posterior Analysis: “Posterior” means “after”. Hence Posterior analysis means checking
the algorithm after its implementation. In this, the algorithm is checked by
implementing it in any programming language and executing it. This analysis helps to
get the actual and real analysis report about correctness(for every possible input/s if it
shows/returns correct output or not), space required, time consumed etc. That is, it is
dependent on the language of the compiler and the type of hardware used.
What is Algorithm complexity and how to find it?
An algorithm is defined as complex based on the amount of Space and Time it consumes.
Hence the Complexity of an algorithm refers to the measure of the Time that it will need to
execute and get the expected output, and the Space it will need to store all the data
(input, temporary data and output). Hence these two factors define the efficiency of an
algorithm.
The two factors of Algorithm Complexity are:
 Time Factor: Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.
 Space Factor: Space is measured by counting the maximum memory space required by
the algorithm to run/execute.
Therefore the complexity of an algorithm can be divided into two types:
1. Space Complexity: The space complexity of an algorithm refers to the amount of
memory required by the algorithm to store the variables and get the result. This can be for
inputs, temporary operations, or outputs.
How to calculate Space Complexity?
The space complexity of an algorithm is calculated by determining the following 2
components:

Fixed Part: This refers to the space that is definitely required by the algorithm. For
example, input variables, output variables, program size, etc.
 Variable Part: This refers to the space that can be different based on the
implementation of the algorithm. For example, temporary variables, dynamic memory
allocation, recursion stack space, etc.
Therefore the space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.
Time Complexity: The time complexity of an algorithm refers to the amount of time that is
required by the algorithm to execute and get the result. This can be for normal operations,
conditional if-else statements, loop statements, etc.
How to calculate Time Complexity?
The time complexity of an algorithm is also calculated by determining the following 2
components:
 Constant time part: Any instruction that is executed just once comes in this part. For
example, input, output, if-else, switch, arithmetic operations etc.
 Variable Time Part: Any instruction that is executed more than once, say n times,
comes in this part. For example, loops, recursion, etc.
Therefore the time complexity T(P) of any algorithm P is T(P) = C + TP(I), where C is the constant time part and TP(I) is the variable part of the algorithm, which depends on the instance characteristic I.
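As a hedged illustration of the T(P) = C + TP(I) split, the sketch below counts the key operations (comparisons) performed by a linear search: the setup before the loop runs once (the constant part C), while the comparison inside the loop runs up to n times (the variable part TP(I), which depends on the instance):

```c
/* Linear search that also counts its key operations (comparisons).
 * On a miss, the comparison count equals n, the variable part of the
 * running time; the setup before the loop is the constant part. */
int linear_search_counted(const int a[], int n, int key, int *comparisons)
{
    *comparisons = 0;          /* constant-time setup */
    for (int i = 0; i < n; i++) {
        (*comparisons)++;      /* one key operation per iteration */
        if (a[i] == key)
            return i;
    }
    return -1;
}
```

Searching an array of 4 elements for a missing key performs exactly 4 comparisons, matching the worst-case count used in time-complexity analysis.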
How to express an Algorithm?
1. Natural Language :- Here we express the algorithm in natural English. It is often hard to understand the algorithm precisely from it.
2. Flow Chart :- Here we express the algorithm by making a graphical/pictorial representation of it. It is easier to understand than natural language.
3. Pseudo Code :- Here we express the algorithm in the form of annotations and informative text written in plain English, which is very similar to real code but, as it has no syntax like any programming language, cannot be compiled or interpreted by the computer. It is the best way to express an algorithm because it can be understood even by a layman with some school-level programming knowledge.
And like computational thinking and its other elements we’ve discussed, algorithms are something we experience regularly in our lives.
 If you’re an amateur chef or a frozen meal aficionado, you follow recipes and directions for preparing food, and that’s an algorithm.
 When you’re feeling groovy and bust out in a dance routine – maybe the Cha Cha Slide, the Macarena, or Flossing – you are also following a routine that emulates an algorithm (and simultaneously being really cool).
 Outlining a process for checking out books in a school library, or instructions for cleaning up at the end of the day, is developing an algorithm and letting your inner computer scientist shine.
Examples of Algorithms in Curriculum
Beginning to develop students’ algorithmic prowess, however, does not require formal practice with coding or even access to technology. Have students map directions for a peer to navigate a maze, create visual flowcharts for tasks, or develop a coded language.
To get started, here are ideas for incorporating algorithmic thinking in different subjects:
 English Language Arts: Students map a flow chart that details directions for determining whether to use a colon or dash in a sentence.
 Mathematics: In a word problem, students develop a step-by-step process for how they answered a question that can then be applied to similar problems.
 Science: Students articulate how to classify elements in the periodic table.
 Social Studies: Students describe a sequence of smaller events in history that precipitated a much larger event.
 Languages: Students apply new vocabulary and practice speaking skills to direct another student to perform a task, whether it’s ordering coffee at a café or navigating from one point in a classroom to another.
 Arts: Students create instructions for drawing a picture that another student then has to use to recreate the image.
Examples of Algorithms in Computer Science
These are obviously more elementary examples; algorithms – especially those used in
coding – are often far more intricate and complex. To contextualize algorithms
in computer science and programming, below are two examples.
Standardized Testing and Algorithms: Coding enables the adaptive technology often
leveraged in classrooms today.
For example, the shift to computer-based standardized tests has led to the advent of
adaptive assessments that pick questions based on student ability as determined by
correct and incorrect answers given.
If students select the correct answer to a question, then the next question is moderately
more difficult. But if they answer wrong, then the assessment offers a moderately easier
question. This occurs through an iterative algorithm that starts with a pool of questions.
After an answer, the pool is adjusted accordingly. This repeats continuously.
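The adjustment rule described above can be sketched as a tiny state update (names are illustrative, and real adaptive tests use far richer statistical models than this):

```c
/* One step of a simplified adaptive test: raise the difficulty level
 * after a correct answer, lower it after a wrong one, and keep the
 * level inside [min_level, max_level]. */
int next_difficulty(int level, int answered_correctly,
                    int min_level, int max_level)
{
    level += answered_correctly ? 1 : -1;
    if (level < min_level) level = min_level;
    if (level > max_level) level = max_level;
    return level;
}
```

Iterating this function over a student's sequence of answers drifts the difficulty toward the level where the student answers about half the questions correctly, which is the intuition behind adaptive assessment.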
The Omnipotent Google and Algorithms: Google’s search results are determined (in
part) by the PageRank algorithm, which assigns a webpage’s importance based on the
number of sites linking to it.
So, if we search for ‘what is an algorithm,’ we can bet that the chosen pages have some of the most links to them for that topic. It’s still more complicated than this, of course; if you are interested, there are articles that go into the intricacies of the PageRank algorithm.
There are over 1.5 billion websites with billions more pages to count, but thanks to
algorithmic thinking we can type just about anything into Google and expect to be
delivered a curated list of resources in under a second. This right here is the power of
algorithmic thinking.
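To make the idea concrete, here is a hedged sketch of the basic PageRank power iteration on a tiny hand-coded link graph (the three-page graph, the 0.85 damping factor, and all names are assumptions for illustration; real search ranking combines many more signals):

```c
#define NPAGES 3   /* pages 0, 1, 2; links: 0->2, 1->2, 2->0 */

/* Repeatedly apply the basic PageRank update:
 * rank[j] = (1-d)/N + d * sum over pages i linking to j of rank[i]/outdeg(i) */
void pagerank(double rank[NPAGES], int iterations)
{
    /* adjacency: link[i][j] = 1 if page i links to page j */
    static const int link[NPAGES][NPAGES] = {{0,0,1},{0,0,1},{1,0,0}};
    const double d = 0.85;
    for (int i = 0; i < NPAGES; i++)
        rank[i] = 1.0 / NPAGES;               /* uniform starting ranks */
    while (iterations-- > 0) {
        double next[NPAGES];
        for (int j = 0; j < NPAGES; j++)
            next[j] = (1.0 - d) / NPAGES;     /* teleport share */
        for (int i = 0; i < NPAGES; i++) {
            int outdeg = 0;
            for (int j = 0; j < NPAGES; j++)
                outdeg += link[i][j];
            for (int j = 0; j < NPAGES; j++)
                if (link[i][j])
                    next[j] += d * rank[i] / outdeg;
        }
        for (int j = 0; j < NPAGES; j++)
            rank[j] = next[j];
    }
}
```

In this toy graph, page 2 receives two incoming links and ends up with the highest rank, while page 1, which nobody links to, gets only the teleport share (1-d)/N, matching the intuition that importance flows along links.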
HOW COMPUTATIONAL THINKING RELATES TO OUR LIVES ESPECIALLY AS STUDENTS
The world and the global economy are changing around us. For the bulk of the 20th century,
we lived in a manufacturing economy — one in which the creation, distribution and sale of
goods created jobs. But the 21st century and its advances in technology have created an
information economy — one in which knowledge and skills are the currency rather than
goods produced in a factory or on an assembly line.
Because of this shift from a manufacturing to an information economy, today’s students
must learn differently and develop different skills than students of just a few years ago. In
the 20th century, a student could study and work hard, attend a university, and then count
on a steady, secure job related in some way to manufacturing. The goal of that job was
almost always to make things and to make them as efficiently and inexpensively as possible.
But, today, those manufacturing jobs are quickly disappearing. The economy is now driven
by information and knowledge, and jobs in all industries require more advanced skills —
skills that include the ability to think computationally:
 Doctors will save more lives by optimizing the exchange of organs between donors and recipients, as well as through advanced drug design that avoids the creation of drug-resistant disease strains.
 Artists will develop new modes of human experience by applying the tools needed to express themselves computationally.
 Internet users will apply computational thinking to develop new services and experiences.
How Does Computational Thinking Help Your Student?
Students who learn how to think computationally will be the ones who participate in these new developments — and they will be the ones who enjoy steady, secure and lucrative employment as others struggle through the transition from manufacturing to information.
Computational thinking offers students three main benefits:
 Problem-solving skills: Students who learn how to think computationally are the ones who will be able to overcome challenges and come up with solutions to complex problems.
 Creative thinking abilities: Students who learn how to think computationally are the ones who will be able to research, gather and understand new information, and then to apply that new information to issues and projects of all kinds.
 Autonomy and confidence: Students who learn how to think computationally are the ones who will feel comfortable working in groups as well as confident when forced to take on a challenge independently.
Computational thinking is also helpful in any number of subjects that your student is
pursuing in school. It can help students explore new information and ideas, and it can be
universally applied — no matter what they’re interested in studying and no matter what line
of work they want to someday enter.
Google’s Computational Thinking for Educators curriculum indicates how computational
thinking is helpful to students outside of computer science, including in the subject areas of:
 Literature: Computational thinking can help students break down and analyze poems with regard to structure, tone, meter, imagery and more.
 Economics: Computational thinking can help students identify patterns and cycles that affect the rise and fall of a nation’s economy.
 Mathematics: Computational thinking can help students develop a reflexive understanding of difficult concepts, such as the rules for factoring second-order polynomials.
 Chemistry: Computational thinking can help students visualize the rules that govern chemical bonds and interactions.
Robotics Helps Students Develop Computational Thinking Skills
A strong connection exists between robotics and the development of computational thinking
abilities. MIT Professor Seymour Papert first discovered this connection in the 1960s when
he taught students to program a robotic turtle to make specific moves and take certain
actions.
Papert called the robotic turtles “objects to think with.” His students would watch the
robotic turtle encounter an obstacle. The students would then imagine how they would
navigate around the obstacle. And, finally, the students would apply that solution to the
program, connecting robotics and computational thinking. For these early learners, the
exercises can be simple — just describing the actions of a robot as it demonstrates its
capabilities and range of motion can serve as a beginning in computational thinking. Once
this simple introduction is made, students can take on more and more complex exercises to
develop greater abilities.
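Papert's exercise can be imitated in a few lines of ordinary code. The sketch below (an illustration added here, not Papert's actual Logo program) simulates a turtle on a grid that walks forward and applies a simple "navigate around" rule, turning right, whenever it meets an obstacle or a wall:

```python
def run_turtle(grid, steps):
    """grid: list of strings where '#' marks an obstacle. The turtle
    starts at (0, 0) facing east and turns right when blocked."""
    directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # E, S, W, N
    row, col, facing = 0, 0, 0
    path = [(row, col)]
    for _ in range(steps):
        dr, dc = directions[facing]
        nr, nc = row + dr, col + dc
        blocked = (
            not (0 <= nr < len(grid)) or
            not (0 <= nc < len(grid[0])) or
            grid[nr][nc] == "#"
        )
        if blocked:
            facing = (facing + 1) % 4   # the "navigate around" rule
        else:
            row, col = nr, nc
            path.append((row, col))
    return path

grid = [
    "..#",
    "...",
    "...",
]
print(run_turtle(grid, 4))   # → [(0, 0), (0, 1), (1, 1), (2, 1)]
```

Describing and then encoding the turtle's behavior, as the students did, is the same move from observation to explicit rule that the text calls the beginning of computational thinking.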
HOW COMPUTATIONAL THINKING RELATES TO PHARMACY
Critical Thinking (CT) is one of the most desired skills of a pharmacy graduate because
pharmacists need to think for themselves, question claims, use good judgment, and make
decisions. It is needed in almost every facet of pharmacy practice because pharmacy
students need to evaluate claims made in the literature, manage and resolve patients’
medication problems, and assess treatment outcomes. While pharmacy educators may
agree that CT is an essential skill for pharmacy students to develop, it must be consistently
defined because the definition determines how it is taught and assessed. While many
definitions of CT exist, it is most commonly defined as automatically questioning if the
information presented is factual, reliable, evidence-based, and unbiased. In simpler terms, it
is reflecting on what to believe or do.
To operationalize the CT definition, six core CT skills have been proposed: interpretation,
analysis, evaluation, inference, explanation, and self-regulation (directing one's actions
automatically). Interpretation includes understanding and communicating the meaning of
information to others. Analysis includes connecting pieces of information together to
determine the intended meaning. Inference is recognizing elements of information one has
and using those elements to reach reasonable conclusions or hypotheses. Evaluation
involves making a judgment about the credibility of a statement or information. Explanation
includes adding clarity to information one shares so it can be fully understood by another
person. Self-regulation is the ability to control one’s own thoughts, behavior and emotions.
Besides the six core skills, CT is more than a stepwise process. It is a summation of attitude,
knowledge, and knowledge of the CT process (Attitude + Knowledge + Thinking Skills =
Critical Thinking). All three components are necessary. First, individuals need an attitude that
aligns with CT. This attitude includes a willingness to plan, being flexible, being persistent,
willingness to self-correct, being mindful and a desire to reconcile information. If the
attitude is not there, it is unlikely that the individual will engage in the actual process.
Second, CT requires knowledge or something to think about. The more knowledge the
individual has, the better their process and answer. Thus, acquiring foundational, requisite
knowledge is important in CT. The final part is the knowledge of the CT process. Knowing the
steps and following them is key to success. Not following the steps can lead to incorrect
answers. Skipping steps is one of the barriers to CT. When these three components are
present, CT can occur at a deep level.
While CT is used often, it is important to differentiate CT from other processes. Problem
solving, clinical reasoning and clinical decision-making are related higher-order CT skills and
while the terms may be used interchangeably, there are distinguishing features. Problem
solving is a general skill that involves the application of knowledge and skills to achieve
certain goals. Problem solving can rely on CT but it does not have to. The steps of identifying
a problem, defining the goals, exploring multiple solutions, anticipating outcomes and
acting, looking at the effects, and learning from the experience are all steps that can benefit
from eliminating assumptions or guesses during the problem-solving process. In comparison
to general thinking skills, clinical reasoning and clinical decision-making depend on a CT
mindset and are domain-specific skills that are used within pharmacy and other health
sciences. Clinical reasoning is the ability to consider if one’s evidence-based knowledge is
relevant for a particular patient during the diagnosis, treatment, and management
process. Clinical decision-making happens after the clinical reasoning process and is focused
on compiling data and constructing an argument for treatment based on the interpretation
of the facts/evidence about the patient. Overall, the process of thinking like an expert by
considering the evidence and making correct decisions about a patient to solve a patient’s
problems is a skillset that students should practice so it becomes automatic.
Barriers to Critical Thinking
There are several challenges to students thinking critically: perceptions, poor metacognitive
skills, a fixed mindset, heuristics, biases, and the fact that thinking is effortful. The first barrier is
students’ perceptual problem – students believe they know how to solve problems, so often,
they do not understand why they are being re-taught this skill. Educators teach students
how to monitor their thinking and become better problem solvers by giving them a
framework to be more thoughtful thinkers.
The next challenge is students’ weak metacognitive skills. The relationship between CT and
metacognitive skills has been noted in the literature. Metacognition refers to an individual’s
ability to assess his/her own thinking and actual level of skill or understanding in an area.
Metacognition helps critical thinkers be more aware of and control their thinking
processes. Students who are weak at metacognition jump to conclusions without evaluating
the evidence, thinking they know the answer, which ultimately interferes with CT.
A third reason CT is difficult for students is that they may have a fixed mindset or a belief
that their intelligence cannot change. If students believe CT is an innate skillset that occurs
naturally, they may not invest the effort to develop it because they believe that no matter
how hard they try, they will never get it.
Heuristics can get in the way of CT. Heuristics are our shortcuts to thinking – they are a
strategy applied implicitly or deliberately during decision-making where we use only part of
the information we might otherwise want or need. This results in decisions that are quicker
and less effortful because the individual relies on the single best piece of data, a more
frugal approach. In a classic study, participants were asked, “If a ball and bat cost
$1.10, and the bat is $1 more than the ball, what was the cost of the ball?” The most popular
answer is $0.10, which is incorrect (the correct answer is the ball costs $0.05, the bat then is
$1.05 or $1 more. If the ball was $0.10, the bat is only $0.90 more than the ball). We take
cognitive shortcuts because thinking is effortful and if we can get a quick response that fits
our current needs, we will do it. Kahneman referred to two systems of thought: System 1
and System 2. System 1 is a fast system responsible for intuitive decision-making based on
emotions, vivid imagery, and associative memory. System 2 is a slow system that observes
System 1’s outputs and intervenes when “intuition” is insufficient.
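The bat-and-ball puzzle is a good candidate for deliberately slowing down: writing the two constraints out explicitly and checking every candidate price makes the System 2 answer unavoidable. A small sketch (an illustration added here), working in cents to avoid floating-point rounding:

```python
# Constraints: ball + bat = $1.10 and bat = ball + $1.00.
solutions = []
for ball in range(0, 111):      # candidate ball prices, in cents
    bat = ball + 100            # the bat costs exactly $1 more
    if ball + bat == 110:       # together they cost $1.10
        solutions.append(ball)

print([f"${cents / 100:.2f}" for cents in solutions])   # → ['$0.05']
# The intuitive answer, 10 cents, fails: 10 + 110 = 120, not 110.
```

The exhaustive check is slower than the System 1 guess, which is exactly the point the text makes about effortful thinking.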
Another challenge that makes CT difficult for students is their inherent biases. One major
bias is confirmation bias or the tendency to search for information in a way that confirms
our ideas or beliefs. Confirmation bias happens because of an eagerness to arrive at a
conclusion, so students may assume they are questioning their assumptions when they are
only searching for enough information to confirm their beliefs. When we want to think
critically, we want the evidence against our view to better inform our decision.
CT is difficult and does not develop automatically. It takes practice and effort. Experts think
critically without conscious thought, which makes it effortless. However, developing
expertise is estimated to take 10 years or 10,000 hours (or more) of deliberate practice, so it
is a time consuming activity. In a study of thinking using the game Tetris, it was shown that
initial game learning resulted in higher brain glucose consumption compared to individuals
with experience playing and those watching someone play. Similar results are seen when
comparing experts to novices. Functional MRI studies show that experts use less of their
brain to solve a problem than novices, partly because a problem for a novice is not a
problem for an expert. It is experience that has led to muscle memory and heuristics.
Students do not have a lot of experience thinking critically and therefore, do not want to do
it because it is difficult and time consuming; they would rather do things that are automatic
and effortless.
Developing Critical Thinking Skills
Developing CT skills is difficult but not impossible. CT is a teachable skill and is often
discipline-specific because it relies on discipline-specific knowledge. Research and practice
suggest several factors that improve thinking: a thoughtful learning environment (eg,
integration); seeing or hearing the executive cognitive operations one is trying to improve
actually performed (eg, modeled behavior); guidance and support of one’s efforts until one
can perform alone (eg, scaffolding); and prompting to question what is thought to be
known (eg, challenging assumptions). These are general, key steps that instructors can take
to help students develop CT skills.
Creating a thoughtful learning environment is not limited to just letting students make
mistakes. The first piece of this thoughtful learning environment is helping students to
integrate their knowledge. Integration allows students to build on previous experiences,
provide developmentally appropriate opportunities for the individual to produce optimal
performance, and lay a foundation for further development. By intentionally creating an
environment that allows students to integrate previous and current knowledge, they can
begin to evaluate how the concepts are related and make decisions on how to apply that
knowledge to future, and likely different, situations. Integration can take many forms and
does not necessarily mean courses need to be integrated or aligned in time. Integration can
take the form of integrating the cumulative knowledge gained over the curriculum.
Modeling expert thinking is another way to help students see CT in action and begin to use
these steps themselves. Instructors should verbalize their executive cognitive operations for
students to hear or see when addressing a problem or issue that requires CT. No single step
is too insignificant to point out. Learners are novices, and it should not be assumed
that they understand or know how to perform a seemingly simple step in the thinking
process. By watching the experts process information, learners begin to form those thinking
skills as well.
Scaffolding is another general method that can facilitate development of CT skills.
Scaffolding is a temporary support mechanism. Students receive assistance early on to
complete tasks, then as their proficiency increases, that support is gradually removed. In this
way, the student takes on more and more responsibility for his or her own learning. To
provide scaffolding, instructors should provide clear directions and the purpose of the
activity, keep students on task, direct students to worthy sources, and offer periodic
assessments to clarify expectations. This process helps to reduce uncertainty, surprise and
disappointment while creating momentum and efficiency for the student.
Thinking begins when our assumptions are violated. Driving to work requires little effort. We
do it all the time and sometimes we may wonder how we got to work because our thoughts
were elsewhere. On a daily basis, you assume your drive will be normal and unimpeded.
Now imagine there is traffic. You move from auto-pilot to thinking mode because your
assumptions were violated. When our assumptions are violated, we start to think and we
see this thought process as early as a few weeks from birth. In the classroom, we must
identify and challenge students’ assumptions. As an example from self-care instructors,
when students are asked to recommend a product for cough associated with the common
cold, any student pharmacist with community pharmacy experience may answer
“dextromethorphan.” This may be what they have seen in practice or what they received as
a child from their parents. They have experience in this context. However, this answer is not
supported by the guidelines, but the students will argue it is correct because of their
experience. The cognitive dissonance – when something you expected to happen does not –
starts the critical thinking process. From an instructional standpoint, it may be
important to initiate the critical thinking process by having students make predictions on
outcomes and showing how their predictions may be correct or incorrect.
Developing CT requires a 4-step approach. The first step is explicitly learning the skills of CT.
The second is developing the disposition for effortful thinking. The third step is directing the
learner to activities to increase the probability of application and transfer of skills. The final
step is making the CT process visible by instructors making the metacognitive monitoring
process explicit and overt. These four steps should be included both at the broad curricular
level and down to the more discrete level of a lesson and a course.
Curriculum.
College has been shown to increase CT skills when CT is measured through standardized
assessments of CT skills (four years of college = effect size of 0.6). While part of this growth
in college may be due to maturation and increase in knowledge, developing CT skills requires
curriculum-level coordination. Just like a military action will fail if the individual units do not
play their role, CT development will fail if individual units do not play their respective roles.
One way to develop CT skills is to use a two-fold approach. The first step is to have a course
in the curriculum that teaches the general thinking skill process and starts to develop the
dispositions. The second step is to have individual courses reflect that process within the
context of the subject matter. Ideally courses have explicit learning objectives and make the
thinking process equally as explicit; this is called the infusion method. Typically effect sizes
under 0.2 are considered small, over 0.4 are considered educationally significant, and over
0.7 are considered large. To note, these effect sizes come from a variety of study types,
durations and outcome measures. For example, one study in nursing used a standardized
assessment of CT (California Critical Thinking Skills Test) to compare lecture to
problem-based learning (PBL) in a pre/post design. Examining pre-to-post changes, PBL
showed an effect size of 0.42 whereas lecture showed 0.010. Another study compared four
instructional approaches: general, infusion, immersion and control. The outcome was a
rubric developed by the instructor and research team. Compared to control, the general,
infusion and immersion all showed positive and moderate-to-large effect sizes. Relatively,
infusion was better than general as was immersion with very little difference between
infusion and immersion.
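The effect sizes quoted in this section are standardized mean differences, conventionally Cohen's d: the difference between two group means divided by their pooled standard deviation. A short sketch with made-up scores (the numbers below are hypothetical, not from any study the text describes):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled
    sample standard deviation of the two groups."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical pre/post CT scores (illustrative numbers only)
post = [78, 82, 75, 88, 80, 85]
pre = [70, 74, 72, 79, 71, 77]
print(round(cohens_d(post, pre), 2))
```

On this scale, a d of 0.2 means the groups differ by a fifth of a standard deviation, which is why the text treats values above 0.4 as educationally meaningful.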
Courses.
Within a course structure, collaborative learning (ie, peer teaching, cooperative learning)
helps develop CT more than other activities. One meta-synthesis that attempted to integrate
results from different but interrelated qualitative studies on critical thinking found an effect
size of 0.41 for promoting CT skills when collaborative learning was used. Collaborative
learning provides feedback to learners and puts learners in a setting that challenges their
assumptions and engages them in deeper learning to solve a problem. However, if learners
receive minimal guidance, they may become lost and frustrated or develop
misunderstandings and alternative understandings. Students’ CT improves most in
environments where learning is mediated by someone who confronts their beliefs and
alternative conceptions, encourages them to reflect on their own thinking, creates cognitive
dissonance or puzzlement, and challenges and guides their thinking when they are actively
involved in problem solving. This guided participation role may be implemented by learners
in structured activities with the guidance, support, and challenge of companions.
Lessons.
Individual lessons should be designed with CT in mind by intentionally providing learners
opportunities to engage in complex thinking. The goals of the activities should be made clear
and instructors should acknowledge that effortful thinking is required while recognizing that
the learning environment allows students to make mistakes. Instructors should explicitly
model their expert thinking and actively monitor how students are learning. Adjustments to
teaching should be made reactively as instructors notice trends in student thinking.
Providing enough time to think and learn during these activities is crucial.
Instructors.
While the curriculum structure can have a large effect, it relies heavily on the individual
instructor. Instructor training has been found to be the most effective intervention in
developing CT skills. This training, however, must go beyond having students observe others
think critically. This facilitation requires the appropriate material (eg, cases), facilitation skills
and mentoring skills. Though difficult, instructors should often remain silent during the
activity. When necessary, instructors can ask probing questions that require students to
clarify, elaborate, explain in more depth or ask more questions, which are related to
metacognition. Instructors can signal acceptance of the student’s assertions by
paraphrasing, providing a friendly facial expression, or writing responses for all to see. The
key is to facilitate learning and not “do” the learning for the students.
Recommendations
A common model for the process of CT should be used in each pharmacy school curriculum.
Ideally, a course should be required for all students early in the curriculum that addresses
the definition, common model, and dispositions of CT and then provides an opportunity for
students to actively practice these skills on general subject matter content. As students’
knowledge of pharmacy specific content grows, courses need to explicitly use the process
outlined in the general course with application to the subject specific content. The repetition
of these skills in multiple courses or course series will help students practice this skill.
Additionally, all instructors should learn the model taught to students and learn how to
create and facilitate activities to encourage CT in their content areas.
While there may be many templates for CT, we propose a 4-step cycle: generation,
conceptualization, optimization and implementation. In the generation phase, learners
identify the problem and find facts. This is followed by the conceptualization phase when
learners define the problem and draft ideas that could explain the defined problem. In the
optimization phase, learners evaluate and select an idea then design a plan. Finally, the
implementation phase involves accepting the plan and taking action. The cycle restarts with
finding a new problem. For example, during a patient encounter, a learner would enter the
generation phase, find all the problems and facts (laboratory values, past medical history,
etc.). Then the learner would define the problem(s) and generate ideas as to why the
problems are occurring. For example, the patient is complaining of fatigue and the learner
would have to come up with reasons why fatigue might occur (anemia, lack of sleep,
pregnancy, poor diet). The learner then uses the facts to evaluate each potential cause and
consider what further tests may be necessary to exclude some of the potential causes. After
selecting the cause, the learner formulates a plan and decides his or her next action. Once
the learner discovers the patient is anemic, the cycle restarts with treatment options. This
cycle can be used along with the Joint Commission of Pharmacy Practitioners Pharmacists'
Patient Care Process.
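The cycle above can be sketched as a small loop over the fatigue example. Everything in the sketch (the findings, candidate causes, scoring rule, and plans) is a hypothetical illustration added here, not clinical guidance:

```python
def ct_cycle(facts, candidate_causes, plans):
    """facts: set of observed findings (the generation phase, done by
    the caller); candidate_causes: cause -> findings that would support
    it; plans: cause -> next action."""
    # Conceptualization: draft ideas that could explain the problem.
    ideas = candidate_causes
    # Optimization: evaluate each idea against the facts, select one.
    best = max(ideas, key=lambda cause: len(candidate_causes[cause] & facts))
    # Implementation: accept the plan and take action.
    return best, plans[best]

# Generation: the facts gathered during the patient encounter.
facts = {"fatigue", "low hemoglobin", "pale skin"}
candidate_causes = {
    "anemia": {"fatigue", "low hemoglobin", "pale skin"},
    "lack of sleep": {"fatigue"},
    "poor diet": {"fatigue", "weight loss"},
}
plans = {
    "anemia": "evaluate treatment options",
    "lack of sleep": "take a sleep history",
    "poor diet": "refer for dietary assessment",
}
print(ct_cycle(facts, candidate_causes, plans))
```

The returned plan then feeds the next pass through the cycle, matching the text's point that the process restarts once a new problem (here, treating the anemia) is found.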
CONCLUSION
Critical thinking skills (interpretation, analysis, evaluation, inference, explanation, and
self-regulation) are important for health care providers, including pharmacists. While some
students and instructors may think that CT skills are fixed, CT can be developed and
augmented through a process of attitude alignment, absorption of knowledge, and learning
new thinking skills. CT is also developed when one learns to combat potentially hazardous CT
roadblocks such as bias, heuristics (thinking shortcuts), and simply not wanting to go through
the effort of thinking on a higher level. Pharmacy educators can foster the development of
CT skills in the wide scope of curricular design, in the narrowest interactions between
professor and student, and everywhere in between. It is important to note that the methods
described in this paper do not have to be added to an already compressed curriculum but
rather can be used with existing materials to cover the content in a deeper and more
meaningful way. By modeling expert thinking and using scaffolding techniques to support
students’ CT development, pharmacy educators can instill both the desire and the drive for
students to begin thinking critically. Regardless, it is noteworthy to point out that teaching
CT skills requires time and effort at the potential expense of other skills. Thus, gains in
critical thinking during a PharmD curriculum may be limited by the need to develop a
multitude of other skills such as teamwork, empathy, adaptability and communication.