H. Moga, L. Miron 1), and Gh. Pana 2)
Electronics and Computers Department, Transilvania University, Brasov, Romania
1) Electronics and Informatics Department, “Henri Coanda” Air Force Academy, Brasov, Romania
2) Electronics and Computers Department, Transilvania University, Brasov, Romania
Email: horatiu.moga@gmail.com
Topic: B
Presentation: P
The purpose of our study is the construction of a probabilistic classifier for signal detection. Applications of this classifier can be found in various areas such as surveillance, biomedical equipment, machine-vision automation, and military technology. A Naïve Bayes classifier is used as the machine learning method, and we determine the posterior probabilities for our approach.
Keywords: probabilistic classifier, Bayes, signal detection, Op Amp
We want to build a classifier that predicts whether a ball is red or blue based on its measured size alone. We have two groups of balls, red and blue, with a border between them. A white ball (which could be red or blue) belongs to one of the two groups, and its size is used to evaluate its membership.
Fig. 1: Red balls and blue balls are separated by size.
Class Priors - The class variable $C$ takes on two values, so we can encode red balls by the value 1 and blue balls by the value 0 [1]. Within the general population there is an approximately equal number of red balls and blue balls. In that case the probability of the red class occurring is written simply as $P(C=1)$ and the probability of the blue class occurring as $P(C=0)$. These probabilities are set prior to making any measurements and hence are called the prior probabilities of class membership.
Class Conditional Likelihood - Now we have a ball (plotted white), randomly selected from the population, and we make a measurement of its size. There will be a natural distribution of the size of red balls and of blue balls; in other words, there is a class-conditional distribution of the measured feature, in this case size. We can write these class-conditional distributions as $p(s \mid C=1)$ and $p(s \mid C=0)$ for the red and blue classes respectively.
Class Posterior - From Bayes' rule we can obtain the posterior probability of class membership by noting that

$P(s, C=1) = p(s \mid C=1)\, P(C=1) = P(C=1 \mid s)\, p(s)$   (1)

and also

$P(C=1 \mid s) = \dfrac{p(s \mid C=1)\, P(C=1)}{p(s)}$   (2)

The marginal likelihood of our measurement, $p(s)$, is the probability of measuring a size $s$ irrespective of the class, so

$p(s) = p(s \mid C=1)\, P(C=1) + p(s \mid C=0)\, P(C=0)$

which means that the class posteriors also sum to one,

$P(C=0 \mid s) + P(C=1 \mid s) = 1$   (3)
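To make equations (1)-(3) concrete, the sketch below computes the posterior of the red class for a single size measurement, assuming Gaussian class-conditional densities. All numerical values (means, standard deviations, priors, and the measured size) are illustrative placeholders, not values from the paper:

```python
import math

def gaussian_pdf(s, mu, sigma):
    """Class-conditional likelihood p(s | C) modelled as a Gaussian density."""
    return math.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def posterior_red(s, prior_red=0.5, prior_blue=0.5,
                  mu_red=2.7, sigma_red=0.3, mu_blue=2.2, sigma_blue=0.3):
    """P(C=1 | s) via Bayes' rule, eq. (2); all parameter values are illustrative."""
    joint_red = gaussian_pdf(s, mu_red, sigma_red) * prior_red      # p(s|C=1) P(C=1)
    joint_blue = gaussian_pdf(s, mu_blue, sigma_blue) * prior_blue  # p(s|C=0) P(C=0)
    evidence = joint_red + joint_blue                                # marginal likelihood p(s)
    return joint_red / evidence

s = 2.5
p_red = posterior_red(s)
print(p_red, 1.0 - p_red)  # the two posteriors sum to one, eq. (3)
```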
Discriminant Functions [2] - From Fig. 1 we can see the empirical distributions of size for both reds and blues. The first thing to notice is that there is a distinct difference in the location of the distributions and they can be separated to a large extent (suppose that reds are typically bigger than blues). However, there is a region where the two distributions overlap, and it is here that classification errors can be made. The region of intersection where $P(C=1 \mid s) = P(C=0 \mid s)$ is important as it defines our decision boundary. If we make a measurement of size $s_0$ and see that $P(C=1 \mid s_0) > P(C=0 \mid s_0)$, then whilst there is some probability that we have measured a rather bigger blue, to minimize the unavoidable errors that will be made our decision should be based on the largest posterior probability. We can then define a discriminant function based on our posterior probabilities; one such function is the ratio of the posterior probabilities of the two classes. Taking the logarithm of this ratio gives the general discriminant function

$f(s) = \log \dfrac{P(C=1 \mid s)}{P(C=0 \mid s)}$   (4)

which defines the rule that $s$ is assigned to $C=1$ (red) if $f(s) > 0$ and to $C=0$ (blue) if $f(s) < 0$.
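A minimal sketch of the decision rule implied by eq. (4): because the evidence $p(s)$ cancels in the ratio of posteriors, the discriminant can be evaluated directly from the prior-weighted class-conditional likelihoods. The Gaussian parameters and priors below are again illustrative placeholders:

```python
import math

def gaussian_pdf(s, mu, sigma):
    return math.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def discriminant(s, prior_red=0.5, prior_blue=0.5,
                 mu_red=2.7, sigma_red=0.3, mu_blue=2.2, sigma_blue=0.3):
    """f(s) = log[P(C=1|s) / P(C=0|s)]; the evidence p(s) cancels in the ratio."""
    log_red = math.log(gaussian_pdf(s, mu_red, sigma_red) * prior_red)
    log_blue = math.log(gaussian_pdf(s, mu_blue, sigma_blue) * prior_blue)
    return log_red - log_blue

def classify(s):
    """Assign s to C=1 (red) when f(s) > 0 and to C=0 (blue) otherwise."""
    return 1 if discriminant(s) > 0.0 else 0

print(classify(2.0), classify(3.0))
```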
Discriminative and Generative Classifiers - There are two ways in which we can define our discriminant function [3]. In the first case we can explicitly model the discriminant function using, for example, a linear or nonlinear model. This is often referred to as the discriminative approach to defining a classifier, as all effort is placed on defining the overall discriminant function with no consideration for the class-conditional densities which form the discriminant.
The second way is to focus on estimating the class-conditional densities (distributions if the features are discrete) $p(s \mid C=1)$ and $p(s \mid C=0)$ and then use these estimates to define our posterior probabilities and hence our discriminant function. As the class-conditional densities define the statistical process which generates the features we measure, this approach is often referred to as the generative approach.
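For Gaussian class-conditional densities, the generative approach only requires estimating a mean and a standard deviation per class (and, if desired, the priors from the class frequencies). A minimal sketch, assuming the training data are two hypothetical lists of measured sizes (placeholder values, not data from the paper):

```python
import statistics

# Hypothetical training measurements of ball size, one list per class.
red_sizes = [2.5, 2.8, 2.6, 2.9, 2.7]
blue_sizes = [2.0, 2.3, 2.1, 2.4, 2.2]

# Maximum-likelihood estimates of the Gaussian class-conditional parameters.
mu_red, sigma_red = statistics.mean(red_sizes), statistics.pstdev(red_sizes)
mu_blue, sigma_blue = statistics.mean(blue_sizes), statistics.pstdev(blue_sizes)

# Priors estimated from the relative class frequencies.
n_red, n_blue = len(red_sizes), len(blue_sizes)
prior_red, prior_blue = n_red / (n_red + n_blue), n_blue / (n_red + n_blue)

print(mu_red, sigma_red, mu_blue, sigma_blue, prior_red, prior_blue)
```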
We consider the generative case with two classes, and the detector assumes a Gaussian distribution for each of them. A ball belongs to the first class if its radius lies between 2.25 and 2.75, and to the second class otherwise. For the first class we calculated the mean µ1 = 2.5 and standard deviation σ1 = 0.88; for the second class the mean is µ2 = 3.6 and the standard deviation σ2 = 0.91. The schematic is shown below:
[Schematic of the analog classifier, built around TL082 op amps, AD734 and AD633 analog multipliers, 2N2222 transistors and an LM311 comparator that drives Vout; the input is Vz, the reference sources are Vmiu1 = 2.5 V, Vmiu2 = 3.6 V, Vsigma1 = -1.76 V, Vsigma2 = -1.82 V, VP1 = 7 V and VP2 = 5 V, and the supply is ±15 V.]
We assume constant (uniform) prior weights for the two classes: 0.7 for the first and 0.5 for the second. The outcome shows that the initial hypotheses are verified: the transition bands between (2.25, 2.5) and (2.5, 2.75) decide whether the white ball is red or blue, that is, to which of the two classes it belongs.
[Simulation result: the output V(out), on a 0 V to 15 V scale, plotted against the input V_Vz swept from 0 V to 6 V.]
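For comparison with the simulation, the same decision can be reproduced numerically with the parameters given in the text (µ1 = 2.5, σ1 = 0.88 and weight 0.7 for the first class; µ2 = 3.6, σ2 = 0.91 and weight 0.5 for the second). This is only a software sketch of the comparison the analog circuit performs, not a model of the circuit itself:

```python
import math

def gaussian_pdf(s, mu, sigma):
    return math.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Class parameters taken from the text; 0.7 and 0.5 are the assumed class weights.
MU1, SIGMA1, W1 = 2.5, 0.88, 0.7
MU2, SIGMA2, W2 = 3.6, 0.91, 0.5

def decide(v_z):
    """1 when the weighted likelihood of the first class dominates, 0 otherwise."""
    return 1 if gaussian_pdf(v_z, MU1, SIGMA1) * W1 >= gaussian_pdf(v_z, MU2, SIGMA2) * W2 else 0

# Sweep the input over the same range as the simulation (0 V to 6 V).
for step in range(0, 61, 5):
    v = step / 10.0
    print(f"Vz = {v:4.1f}  class = {decide(v)}")
```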
[1] Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer Science+Business Media, LLC, p. 181, 2006.
[2] Andrew Webb, Statistical Pattern Recognition, Second Edition, John Wiley & Sons Ltd., pp. 123-180, 2000.
[3] Nicu Sebe, Ira Cohen, Ashutosh Garg, Thomas S. Huang, Machine Learning in Computer Vision, Springer Science+Business Media, LLC, p. 71, 2005.