Probably Approximately Correct Learning
Yongsub Lim
Applied Algorithm Laboratory, KAIST

Definition
• A class C is PAC learnable by a hypothesis class H if there is an algorithm such that for every ε > 0, δ > 0, every c ∈ C, and every distribution D over X, given m ≥ poly(1/ε, 1/δ) i.i.d. training examples sampled from D, the algorithm outputs a hypothesis h satisfying
  Pr[err_D(h) ≤ ε] ≥ 1 − δ

Example
• Consider the concept class C consisting of all positive half-lines
• An example is any real number
• E.g., c(x) = 1 if x ≤ k_c, and c(x) = 0 if x > k_c

Proof: C is PAC learnable
• C is PAC learnable by C itself
• Choose k_c⁻ < k_c and k_c⁺ > k_c such that D([k_c⁻, k_c)) = ε and D((k_c, k_c⁺]) = ε
• Our algorithm outputs a hypothesis h with threshold k_h ∈ [max{x_i : c(x_i) = 1}, min{x_i : c(x_i) = 0})
• Suppose k_h ≤ k_c: then h can err only on a positive example x ∈ (k_h, k_c], and errs on no negative example x
• Suppose k_h < k_c⁻, and call this bad event b₁; then err_D(h) > ε. But b₁ occurs only if no training example falls in R = [k_c⁻, k_c]:
  Pr[x ∉ R] = 1 − ε
  Pr[x₁ ∉ R ∧ … ∧ x_m ∉ R] = (1 − ε)^m, since the x_i's are i.i.d.
  Pr[b₁] ≤ (1 − ε)^m ≤ e^(−εm)
• By the symmetric argument for the bad event b₂ (k_h > k_c⁺), Pr[b₁ ∨ b₂] ≤ 2e^(−εm)
• Hence Pr[err_D(h) ≤ ε] ≥ 1 − 2e^(−εm), and we require 1 − 2e^(−εm) ≥ 1 − δ
• Solving 2e^(−εm) ≤ δ gives
  m ≥ (1/ε) ln(2/δ)
• Therefore the class C is PAC learnable by itself with at least (1/ε) ln(2/δ) training examples

A More General Theorem
• If h ∈ H is consistent with m independent, randomly drawn labeled training examples, then for any ε, δ > 0 we can assert with confidence 1 − δ that the error of h is less than ε, provided that
  m ≥ (1/ε)(ln |H| + ln(1/δ))

Thanks
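As a sanity check on the proof, the half-line learner and its sample bound m ≥ (1/ε) ln(2/δ) can be simulated. This is a minimal sketch, not from the slides: it assumes a uniform distribution on [0, 1], a target threshold k_c = 0.5, and a learner that returns the largest positively labeled example as k_h (one consistent choice from the interval the proof allows).

```python
import math
import random

# Target concept: c(x) = 1 if x <= k_c, else 0 (the half-line from the slides).
K_C = 0.5
EPS, DELTA = 0.05, 0.05

# Sample bound from the proof: m >= (1/eps) * ln(2/delta).
m = math.ceil((1 / EPS) * math.log(2 / DELTA))

def learn_threshold(examples, labels):
    """Return k_h = the largest positive example (a consistent hypothesis).

    Falls back to 0.0 if no positive example was drawn (hypothetical choice).
    """
    positives = [x for x, y in zip(examples, labels) if y == 1]
    return max(positives) if positives else 0.0

random.seed(0)
trials, failures = 1000, 0
for _ in range(trials):
    xs = [random.random() for _ in range(m)]
    ys = [1 if x <= K_C else 0 for x in xs]
    k_h = learn_threshold(xs, ys)
    # Under the uniform distribution, err_D(h) = |k_c - k_h|.
    if abs(K_C - k_h) > EPS:
        failures += 1

print(f"m = {m}, empirical failure rate = {failures / trials}")
```

With these parameters m = 74, and the fraction of trials where err_D(h) > ε should come out well below δ = 0.05, matching the 1 − 2e^(−εm) guarantee.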