EE-538: NEURAL NETWORKS
HOMEWORK 3
MARYAM ASAD
STUDENT ID: 20225384
SCHOOL OF EE, KAIST

Question 1

Question 1 is solved by hand and attached at the end of this report.

Question 2

Platform used: MATLAB

Part b of Homework 2, Question 2, by the Homework 2 algorithm

The Homework 2 algorithm is the one on the last slide of Lecture 2. It does not include any normalization, so the weight vector, and therefore the output y, grows very large. The data for part b of Question 2 (N1 = 0, N2 = 200, N3 = 200) is shown in Figures 1 and 2, in 2D and 3D respectively. The first principal component obtained from this algorithm is shown in Figure 3.

Figure 1: Part b data in 2D.
Figure 2: Part b data in 3D.
Figure 3: First principal component from the Homework 2 algorithm (no normalization).

Part b of Homework 2, Question 2, by the Homework 3 algorithm

The same question with the same data is solved by the algorithm with normalization on slide 3-3 of Lecture 3. The principal component thus obtained is shown in Figure 4. The y axis shows smaller values than for the algorithm without normalization, although in both cases y remains finite over the training run. The weight vector of the algorithm without normalization has a larger magnitude than that of the algorithm with normalization.

Figure 4: First principal component from the Homework 3 algorithm (with normalization).

Part d of Homework 2, Question 2, by the Homework 2 algorithm

The data for part d of Question 2 (N1 = 0, N2 = 200, N3 = 200) is shown in Figures 5 and 6, in 2D and 3D respectively. The first principal component obtained from this algorithm is shown in Figure 7. The Homework 2 algorithm is the one on the last slide of Lecture 2. It does not include any normalization, so the weight vector, and therefore y, grows very large, as shown in Figure 7.

Figure 5: Part d data in 2D.
Figure 6: Part d data in 3D.
Figure 7: First principal component from the Homework 2 algorithm (no normalization).

Part d of Homework 2, Question 2, by the Homework 3 algorithm

The same question with the same data is solved by the algorithm with normalization on slide 3-3 of Lecture 3. The principal component thus obtained is shown in Figure 8. The y axis shows smaller values than for the algorithm without normalization in Figure 7, although in both cases y remains finite. The weight vector of the algorithm without normalization has a larger magnitude than that of the algorithm with normalization.

Figure 8: First principal component from the Homework 3 algorithm (with normalization).

Question 3

Part a

The generated data is shown in Figures 9 and 10, in 2D and 3D respectively, with N1 = N2 = 300 and c1 = c2 = 1.

Figure 9: Part a data in 2D.
Figure 10: Part a data in 3D.

Part c

The generated data is shown in Figures 11 and 12, in 2D and 3D respectively, with N1 = N2 = 300, c1 = 1, and c2 = 2.

Figure 11: Part c data in 2D.
Figure 12: Part c data in 3D.

Part e

The generated data is shown in Figures 13 and 14, in 2D and 3D respectively, with N1 = 200, N2 = 300, and c1 = c2 = 1.

Figure 13: Part e data in 2D.
Figure 14: Part e data in 3D.

Part b

The decision boundary obtained by the single-layer perceptron is shown in Figure 15, with N1 = N2 = 300 and c1 = c2 = 1. The learning rate eta is 0.0005 for all parts.

Figure 15: Single-layer perceptron decision boundary for the part a data.

Part d

The decision boundary obtained by the single-layer perceptron is shown in Figure 16, with N1 = N2 = 300, c1 = 1, and c2 = 2. The standard deviation of class B is greater than that of class A, so the class B points are more spread out.

Figure 16: Single-layer perceptron decision boundary for the part c data.

Part f

The decision boundary obtained by the single-layer perceptron is shown in Figure 17, with N1 = 200, N2 = 300, and c1 = c2 = 1. The red dots are fewer because class A has 200 points while class B has 300.

Figure 17: Single-layer perceptron decision boundary for the part e data.
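
Appendix: illustrative MATLAB sketches

For reference, the following is a minimal MATLAB sketch of the two update rules compared in Question 2. It assumes the Homework 2 rule is the plain Hebbian update w <- w + eta*y*x and that the Lecture 3 (slide 3-3) rule simply renormalizes the weight vector after each update; the function name and variable names are my own and are not taken from the slides.

% Minimal sketch: first principal component by Hebbian learning,
% with and without weight normalization.
function w = hebbian_pc1(X, eta, epochs, normalize)
% X         : N-by-d data matrix (rows are zero-mean samples)
% eta       : learning rate
% epochs    : number of passes over the data
% normalize : if true, renormalize w to unit length after every update
    [N, d] = size(X);
    w = randn(d, 1);                 % random initial weight vector
    w = w / norm(w);
    for ep = 1:epochs
        for n = 1:N
            x = X(n, :)';            % current input sample
            y = w' * x;              % neuron output
            w = w + eta * y * x;     % plain Hebbian update (no normalization)
            if normalize
                w = w / norm(w);     % keep ||w|| = 1, as in the normalized rule
            end
        end
    end
end

With normalize = false the norm of w, and hence y, keeps growing, which matches the behaviour reported above for the Homework 2 algorithm; with normalize = true the weight vector stays on the unit sphere and the output values stay small.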
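
The Question 3 experiments can be reproduced along the following lines. This is only a sketch under assumptions: the two classes are taken to be 2-D Gaussian clouds with N1/N2 points and standard deviations c1/c2, the class means m1 and m2 below are placeholders (the actual means come from the assignment statement), and the perceptron uses the standard misclassification-driven update with the learning rate eta = 0.0005 reported above.

% Sketch: generate two Gaussian classes and train a single-layer perceptron.
N1 = 300; N2 = 300;                   % class sizes (part a/b setting)
c1 = 1;   c2 = 1;                     % class standard deviations
m1 = [0 0]; m2 = [3 3];               % placeholder class means
A = m1 + c1 * randn(N1, 2);           % class A samples
B = m2 + c2 * randn(N2, 2);           % class B samples

X   = [A; B];                         % all samples
t   = [ones(N1, 1); -ones(N2, 1)];    % targets: class A = +1, class B = -1
Xa  = [X, ones(N1 + N2, 1)];          % augment with a constant bias input
w   = zeros(3, 1);                    % weights [w1; w2; bias]
eta = 0.0005;                         % learning rate used in all parts

for ep = 1:100                        % fixed number of passes over the data
    for n = 1:(N1 + N2)
        y = sign(Xa(n, :) * w);       % perceptron output
        if y ~= t(n)                  % update only on misclassification
            w = w + eta * t(n) * Xa(n, :)';
        end
    end
end

% Plot the data and the decision boundary w1*x1 + w2*x2 + b = 0.
figure; hold on;
plot(A(:, 1), A(:, 2), 'r.');         % class A in red
plot(B(:, 1), B(:, 2), 'b.');         % class B in blue
xs = linspace(min(X(:, 1)), max(X(:, 1)), 100);
plot(xs, -(w(1) * xs + w(3)) / w(2), 'k-', 'LineWidth', 1.5);

For the other parts only N1, N2, c1, and c2 change, for example c2 = 2 for part d, and N1 = 200, N2 = 300 for part f.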