
Dr. Eick

COSC 6342

“Machine Learning”

Assignment 1, Spring 2009

Due: Tuesday, February 24, 11p (electronic submission) or 5 days after the topic was discussed in the lecture, whichever comes later; Problem 5 is due on March 10, 11p

(submit report and software!).

Remark: Problem 6 was updated on March 1, 2009.

1. (Topic 1) Compare reinforcement learning and supervised learning! What are the main differences between these approaches? Limit your answer to 4-6 sentences!

2. (Topic 2) Derive the solution for $w_0$ and $w_1$ at the bottom of page 30 of the textbook!
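In case a reference point helps, the standard simple-linear-regression (least-squares) solution that such derivations arrive at looks as follows; the notation ($x^t$, $r^t$, $N$) is assumed here, not quoted from the textbook:

```latex
% Least-squares fit of r = w_1 x + w_0 to pairs (x^t, r^t), t = 1..N:
w_1 = \frac{\sum_t x^t r^t \;-\; N\,\bar{x}\,\bar{r}}{\sum_t (x^t)^2 \;-\; N\,\bar{x}^2},
\qquad
w_0 = \bar{r} \;-\; w_1\,\bar{x}
```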

3. (Topic 3) Solve the problem on transparency 9 of Topic 3!

4. (Topic 4) Formula 1.4 on page 10 of the Bishop textbook uses regularization for determining the best weights in curve fitting. What is the motivation for using regularization? How is it used in the particular approach presented? What alternative methods could be used to accomplish the goals of regularization?
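For reference, Bishop's Eq. (1.4) augments the sum-of-squares error of the curve-fitting problem with a quadratic (ridge-style) penalty on the weights:

```latex
\tilde{E}(\mathbf{w}) = \frac{1}{2}\sum_{n=1}^{N}\{y(x_n,\mathbf{w}) - t_n\}^2
                      + \frac{\lambda}{2}\lVert\mathbf{w}\rVert^2
```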

5. (Topic 5) In this task you will develop a binary classifier ($\Re^2 \rightarrow \{0,1\}$), called MTG ("mixture of two Gaussians"), that relies on a parametric approach that uses a mixture of two bivariate Gaussians to approximate the density of each class. The classifier will be evaluated and enhanced on an Arsenic dataset (raw dataset source: http://www.tlc2.uh.edu/dmmlg/Datasets/arsenic.txt) and on an artificial dataset using two-fold cross validation that is repeated 3 times.

Fig. 1: Arsenic Water Well Problem (Longitude $\times$ Latitude $\rightarrow$ {safe, dangerous})

We assume there are two classes C and C', and a density function is associated with each class that assesses density using a mixture of two Gaussians $G_1$ and $G_2$. In particular:

$G_{1,C} \sim N(\mu_{G1,C}, \Sigma_{G1,C})$
$G_{2,C} \sim N(\mu_{G2,C}, \Sigma_{G2,C})$
$G_{1,C'} \sim N(\mu_{G1,C'}, \Sigma_{G1,C'})$
$G_{2,C'} \sim N(\mu_{G2,C'}, \Sigma_{G2,C'})$


Because the classifier operates in $\Re^2$, $\mu_{G,C} = (x_{G,C}, y_{G,C})$ and $\Sigma_{G,C} = \{\sigma_{11,G,C}, \sigma_{12,G,C}, \sigma_{22,G,C}\}$.

The mixture density function for each class is defined as follows:

$\varphi_C := \pi_C \cdot G_{1,C} + (1-\pi_C) \cdot G_{2,C}$
$\varphi_{C'} := \pi_{C'} \cdot G_{1,C'} + (1-\pi_{C'}) \cdot G_{2,C'}$

with $\pi_C$, $\pi_{C'}$ being "mixture" parameters in [0,1].

Finally, the MTG classifier assigns a class as follows to an example $y \in \Re^2$:

IF $\lambda \cdot \varphi_C(y) > \varphi_{C'}(y)$ THEN C ELSE C'

with $\lambda$ being a parameter (default $\lambda = |C|/|C'|$).

In summary, an MTG classifier has 23 parameters (4 Gaussians, each of which is characterized by 5 parameters, plus $\pi_C$, $\pi_{C'}$, and $\lambda$).
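A minimal sketch of how the MTG density and decision rule could be coded, assuming the 23 parameters have already been estimated from the training data (e.g., with EM); `mixture_density`, `mtg_classify`, and the argument layout are illustrative choices, not part of the assignment:

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_density(y, mu1, cov1, mu2, cov2, pi):
    """phi(y) = pi * N(y; mu1, cov1) + (1 - pi) * N(y; mu2, cov2)."""
    return (pi * multivariate_normal.pdf(y, mean=mu1, cov=cov1)
            + (1 - pi) * multivariate_normal.pdf(y, mean=mu2, cov=cov2))

def mtg_classify(y, params_C, params_Cprime, lam):
    """IF lam * phi_C(y) > phi_C'(y) THEN class 0 (C) ELSE class 1 (C')."""
    phi_C  = mixture_density(y, *params_C)
    phi_Cp = mixture_density(y, *params_Cprime)
    return 0 if lam * phi_C > phi_Cp else 1
```

Each `params_*` tuple would carry a class's two means, two covariance matrices, and mixture weight $\pi$; `lam` would default to $|C|/|C'|$ as stated above.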

The goal of the project is to develop procedures and techniques to derive an MTG classifier from a training dataset and to evaluate its performance using two-fold cross validation.

For the purpose of the project, each dataset D is subdivided (using class-stratified sampling) into 3 sets: $D_1$, $D_2$, and $D_P$. $D_1$ and $D_2$ serve as training and test sets for learning the classifier and for two-fold cross-validation; $D_P$ serves as an optional validation set in case your methods require one.
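One possible shape for the evaluation loop, assuming scikit-learn's StratifiedKFold for the class-stratified splits; `fit` and `score` stand in for whatever training and accuracy routines you implement:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def repeated_two_fold(X, labels, fit, score, repeats=3):
    """Class-stratified two-fold cross validation, repeated `repeats` times."""
    results = []
    for rep in range(repeats):
        skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=rep)
        for train_idx, test_idx in skf.split(X, labels):
            model = fit(X[train_idx], labels[train_idx])   # learn the MTG classifier on one fold
            results.append(score(model, X[test_idx], labels[test_idx]))
    return float(np.mean(results))                         # average over the 6 train/test runs
```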

Write and submit a report that describes your approach to deriving an MTG classifier for a dataset, reports the results of the experimental evaluation of your methods, and gives a brief history of the project. Moreover, be prepared to demo your system!

6. (Topic 5) a) Assume we have a dataset with 3 attributes and the following covariance matrix $\Sigma$:

$$\Sigma = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 9 & -2 \\ 0 & -2 & 1 \end{pmatrix}$$

Compute the Mahalanobis distance between the vectors (1,1,0), (1,0,1), and (0,1,1). Also compute the Mahalanobis distance between (1,1,-1) and the three vectors (1,0,0), (0,1,0), (0,0,-1). How do these results differ from using Euclidean distance? Try to explain why particular pairs of vectors are closer to/farther away from each other when using Mahalanobis distance. What advantages do you see in using Mahalanobis distance over Euclidean distance?
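As a sanity check, the Mahalanobis distances could be computed with a few lines of numpy (the function name and the printed pair are illustrative; the actual answers are left to the exercise):

```python
import numpy as np

# Covariance matrix from Problem 6a
Sigma = np.array([[4.0, 0.0, 0.0],
                  [0.0, 9.0, -2.0],
                  [0.0, -2.0, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def mahalanobis(u, v):
    """Mahalanobis distance between u and v under the covariance Sigma above."""
    d = np.asarray(u, dtype=float) - np.asarray(v, dtype=float)
    return float(np.sqrt(d @ Sigma_inv @ d))

print(mahalanobis((1, 1, 0), (1, 0, 1)))  # one of the required pairs
```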

7. (Topic 5) Give an example for which the model parameter estimate obtained using maximum likelihood (ML) differs from the estimate obtained using maximum a posteriori (MAP).
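One standard construction, in case a template helps: a Bernoulli parameter with a Beta prior, where the specific numbers below are illustrative rather than prescribed by the assignment:

```latex
% Observed: 3 heads in N = 4 coin tosses, head probability \theta.
% Maximum likelihood maximizes \theta^3(1-\theta):
\hat{\theta}_{ML} = \tfrac{3}{4}
% With a Beta(2,2) prior p(\theta) \propto \theta(1-\theta), MAP maximizes
% the posterior, which is \propto \theta^4(1-\theta)^2:
\hat{\theta}_{MAP} = \frac{3+2-1}{4+2+2-2} = \tfrac{2}{3} \neq \hat{\theta}_{ML}
```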

8. (Topic 7)
a) Derive the storage and run-time complexity of K-means based on the following input parameters (see the annotated sketch after this problem):
- k is the number of clusters
- d is the number of attributes in the dataset
- n is the number of objects to be clustered
- t is the number of iterations K-means takes
Justify your result!
b) Problem 3 on page 150 of the textbook!
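An annotated K-means skeleton may help locate where k, d, n, and t enter the cost; this sketch omits empty-cluster handling and convergence tests, and it is not the required analysis itself:

```python
import numpy as np

def kmeans(X, k, t):
    n, d = X.shape                                   # data storage: O(n*d)
    centroids = X[np.random.choice(n, k, replace=False)].copy()  # extra O(k*d) storage
    for _ in range(t):                               # t iterations
        # all point-to-centroid squared distances: O(n*k*d) per iteration
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)                # nearest centroid: O(n*k)
        # recompute means; each point is touched once: O(n*d)
        centroids = np.array([X[assign == j].mean(axis=0) for j in range(k)])
    return centroids, assign
```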
