AI-based Energy Management: Hanyang University
Homework 1 (Due by April 14)
Please upload your solution (pdf) on the LMS website.
1. (chapter-1 Nearest Neighbor) Assess the advantages and disadvantages of the nearest neighbor
algorithm.
Advantages: no training is required, and the algorithm is easy to implement.
Disadvantages: it is slow at test (inference) time, and distance metrics on raw pixels are not informative.
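As a quick illustration of these points, here is a minimal 1-NN sketch in numpy (the class name and the choice of L2 distance are my own for illustration, not part of the homework): training only memorizes the data, while prediction compares each query against every stored example.

import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        # "Training" just memorizes the data: no parameters are learned.
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        # Every test point is compared against every training point,
        # which is why prediction is slow when the training set is large.
        preds = []
        for x in X:
            dists = np.sqrt(np.sum((self.X_train - x) ** 2, axis=1))  # L2 distances
            preds.append(self.y_train[np.argmin(dists)])
        return np.array(preds)

# Example usage with random data:
nn = NearestNeighbor()
nn.train(np.random.randn(100, 5), np.random.randint(0, 3, size=100))
print(nn.predict(np.random.randn(2, 5)))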
2. (chapter-1 Linear Classifier) Explain the meaning of weight (W) and bias (b) in the linear classifier
method f(x, W) = Wx + b in high-dimensional space.
W and b define a hyperplane in high-dimensional space that separates the different classes of data points. The weight W decides how strongly each input dimension contributes: it is the value that the input data is multiplied by when it is passed on to the next node. The bias b is an adjustment value added to that product, so that the hyperplane does not have to pass through the origin.
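A small numpy sketch of the score computation f(x, W) = Wx + b (the shapes, e.g. 10 classes and a 3072-dimensional input, are assumed here for illustration and are not part of the homework): each row of W is the normal vector of one class's hyperplane, and b shifts that hyperplane away from the origin.

import numpy as np

num_classes, input_dim = 10, 3072                    # assumed illustrative sizes
W = np.random.randn(num_classes, input_dim) * 0.01   # weights: one hyperplane normal per class
b = np.zeros(num_classes)                            # bias: offsets each hyperplane from the origin

x = np.random.randn(input_dim)   # one flattened input vector
scores = W @ x + b               # f(x, W) = Wx + b, one score per class
print(scores.argmax())           # predicted class = index of the highest score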
3. (chapter-2 Neural Network and backpropagation) Consider the function f(x, y, z) = (x + y) max(y, z)
with x = 2, y = 3, z = 1.
(a) Draw computational graph
(Hand-drawn computational graph: x and y enter an addition node giving a = x + y; y and z enter a max node giving b = max(y, z); a and b enter a multiplication node giving f = a × b.)
(b) compute weights in forward propagation steps
(c) compute gradients in back propagation steps
Note: Refer to the examples in lecture 2.
(b) Forward propagation:
a = x + y = 2 + 3 = 5
b = max(y, z) = max(3, 1) = 3
f = a × b = 5 × 3 = 15

(c) Back propagation:
∂f/∂a = b = 3, ∂f/∂b = a = 5
∂f/∂x = ∂f/∂a × ∂a/∂x = 3 × 1 = 3
∂f/∂y = ∂f/∂a × ∂a/∂y + ∂f/∂b × ∂b/∂y = 3 × 1 + 5 × 1 = 8   (since y > z, max(y, z) = y and ∂b/∂y = 1)
∂f/∂z = ∂f/∂b × ∂b/∂z = 5 × 0 = 0
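The same forward and backward pass can be checked with a few lines of Python (a sketch mirroring the hand computation above, not required by the homework):

# f(x, y, z) = (x + y) * max(y, z) at x = 2, y = 3, z = 1
x, y, z = 2.0, 3.0, 1.0

# Forward propagation
a = x + y          # 5
b = max(y, z)      # 3
f = a * b          # 15

# Back propagation (chain rule through each node)
df_da = b          # 3
df_db = a          # 5
df_dx = df_da * 1.0                                     # 3
df_dy = df_da * 1.0 + df_db * (1.0 if y > z else 0.0)   # 8
df_dz = df_db * (1.0 if z > y else 0.0)                 # 0
print(f, df_dx, df_dy, df_dz)  # 15.0 3.0 8.0 0.0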
4. (chapter-3 ConvNet) In a convolutional layer, given an input volume of 64 × 64 × 3, apply 16 4 × 4
filters with stride 2 and pad 2. What are the output size and number of parameters?
Output height = (input height - filter height + 2 × pad) / stride + 1 = (64 - 4 + 2 × 2) / 2 + 1 = 33
Output width = (input width - filter width + 2 × pad) / stride + 1 = (64 - 4 + 2 × 2) / 2 + 1 = 33
Output depth = number of filters = 16
Output size: 33 × 33 × 16

Number of parameters = (filter height × filter width × input depth + 1) × number of filters
                     = (4 × 4 × 3 + 1) × 16 = 784
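A short Python check of the arithmetic above (the helper function name is my own, not from the homework):

def conv_output_and_params(in_h, in_w, in_c, k, num_filters, stride, pad):
    # Output spatial size from the standard conv formula; +1 per filter for its bias.
    out_h = (in_h - k + 2 * pad) // stride + 1
    out_w = (in_w - k + 2 * pad) // stride + 1
    params = (k * k * in_c + 1) * num_filters
    return (out_h, out_w, num_filters), params

print(conv_output_and_params(64, 64, 3, k=4, num_filters=16, stride=2, pad=2))
# ((33, 33, 16), 784)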
5. (chapter-3 ConvNet) Write code from scratch to perform 2D convolution using the numpy library and
Python.
Input: a 2D image array of any size; kernel: a 2D kernel of any odd size, smaller than the input image
Output: the convolved image with the same shape as the input (zero padding)
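One possible solution sketch in numpy (the submitted code itself is not visible in the scan; this follows the CNN convention of sliding the kernel without flipping it, i.e. cross-correlation):

import numpy as np

def conv2d(image, kernel):
    # 2D convolution with zero padding so the output keeps the input shape.
    kh, kw = kernel.shape
    assert kh % 2 == 1 and kw % 2 == 1, "kernel size must be odd"
    ph, pw = kh // 2, kw // 2                                  # padding that preserves the shape
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Example usage with a small image and a 3x3 averaging kernel:
img = np.arange(25, dtype=float).reshape(5, 5)
kern = np.ones((3, 3)) / 9.0
print(conv2d(img, kern))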
Note: vanishing gradient occurs because the derivative of the activation function (tanh) is everywhere at most 1.
6. (Chapter 4 -RNNs) Explain why vanilla Recurrent Neural Networks (RNNs) cannot remember
long-term information and how this issue can be addressed.
Because during backpropagation the gradient at each time step is multiplied by the derivative of the activation function, as the number of time steps gets longer the gradients used to update the weights become vanishingly small. Information from previous (earlier) time steps is therefore lost, and the RNN cannot learn long-term dependencies. This can be addressed by replacing the vanilla RNN cell with a gated cell such as the LSTM (see question 7).
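A toy numerical illustration of the effect (the sequence length and random pre-activations are assumed for illustration, not from the homework): the backpropagated gradient is repeatedly multiplied by tanh'(·), which is at most 1, so it shrinks quickly as the sequence gets longer.

import numpy as np

np.random.seed(0)
T = 50                                             # sequence length
pre_activations = np.random.randn(T)
tanh_grad = 1.0 - np.tanh(pre_activations) ** 2    # derivative of tanh at each step

grad = 1.0
for t in range(T):
    grad *= tanh_grad[t]                           # repeated multiplication during backpropagation
    if t in (4, 19, 49):
        print(f"after {t + 1:2d} steps: gradient factor = {grad:.3e}")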
7. (Chapter 4 - RNNs) Explain how we can address the vanishing gradient problem of the vanilla
RNNs?
A vanilla RNN performs only a single (tanh) operation per time step, and its derivatives are very small at each layer, whereas an LSTM works with four gates (input, forget, output, and gate). The LSTM keeps a separate cell state that is updated additively and passed on as the next cell state, so the backpropagated gradient does not have to pass through the small activation derivatives at every step; the hidden state h_t is then produced from the cell state as the output.

(Hand-drawn LSTM cell diagram: input x_t, input gate, cell state, next cell state, hidden state, output h_t.)
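A minimal single-step LSTM cell sketch in numpy (the weight layout, shapes, and names are assumptions for illustration, not from the homework), showing the four gates and the additive cell-state update:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # W has shape (4 * hidden, input + hidden); b has shape (4 * hidden,).
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([x_t, h_prev]) + b
    i = sigmoid(z[0 * hidden:1 * hidden])   # input gate
    f = sigmoid(z[1 * hidden:2 * hidden])   # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden])   # output gate
    g = np.tanh(z[3 * hidden:4 * hidden])   # candidate values
    c_next = f * c_prev + i * g             # additive cell-state update
    h_next = o * np.tanh(c_next)            # hidden state / output
    return h_next, c_next

# Example usage with random weights:
inp, hid = 3, 4
W = np.random.randn(4 * hid, inp + hid) * 0.1
b = np.zeros(4 * hid)
h, c = lstm_step(np.random.randn(inp), np.zeros(hid), np.zeros(hid), W, b)
print(h.shape, c.shape)  # (4,) (4,)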