
This is a natural, commonly studied, important model. Angluin and Kharitonov [AK91] showed that membership queries do not help in PAC-learning DNF.

Uniform distribution with queries

However, membership queries are helpful in learning under the uniform distribution. Uniform Distribution learning with membership queries is the easiest model we've seen thus far, and in it there is finally some progress! Mansour [Man92], building on earlier work of Kushilevitz and Mansour [KM93], gave a Fourier-based algorithm in this model that learns DNF in time n^{log log n}. Finally, in 1994 Jackson [Jac94] produced the celebrated Harmonic Sieve, a novel algorithm combining Fourier analysis with Freund and Schapire's boosting to learn DNF in polynomial time.
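The Fourier-based algorithms just mentioned all rest on one basic primitive: estimating Fourier coefficients hat{f}(S) = E_x[f(x) · chi_S(x)] of the target, where membership queries let us evaluate f at uniformly random points. The sketch below shows only that estimation primitive, not the Kushilevitz-Mansour or Harmonic Sieve algorithms themselves; the function names and the toy parity target are illustrative choices of mine.

```python
import random

def chi(S, x):
    """Parity character chi_S(x) = product of x_i over i in S, with each x_i in {-1, +1}."""
    prod = 1
    for i in S:
        prod *= x[i]
    return prod

def estimate_fourier_coefficient(f, S, n, num_samples=100_000):
    """Estimate hat{f}(S) = E_x[f(x) * chi_S(x)] under the uniform distribution,
    using membership queries to f at uniformly random points of {-1, +1}^n."""
    total = 0
    for _ in range(num_samples):
        x = [random.choice((-1, 1)) for _ in range(n)]  # uniform query point
        total += f(x) * chi(S, x)                       # membership query to f
    return total / num_samples

# Toy target: the parity x_0 * x_1 on n = 5 bits.  Its only nonzero Fourier
# coefficient is hat{f}({0, 1}) = 1, so the first estimate should be near 1
# and the second near 0.
f = lambda x: x[0] * x[1]
print(estimate_fourier_coefficient(f, [0, 1], 5))
print(estimate_fourier_coefficient(f, [0], 5))
```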

Random Walk model

Recently we've been able to improve on this last result. [Bshouty-Mossel-O-Servedio-03] considers a natural, passive learning model whose difficulty is intermediate between Uniform Distribution learning and Uniform Distribution learning with membership queries: in the Random Walk model, examples are not given i.i.d. as usual, but are instead generated by a standard random walk on the hypercube (sketched below). The learner's hypothesis is evaluated under the uniform distribution. It can be shown that DNF are also learnable in polynomial time in this model. The proof begins with Jackson's algorithm and does some extra Fourier analysis.
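To make the example oracle concrete: one common formalization of the random-walk oracle starts at a uniformly random point of the hypercube and, at each step, flips a single uniformly random coordinate, handing the learner the labeled point it lands on (some papers use the lazy variant, which stays put with probability 1/2). The sketch below only implements that example generator; the function name and the {0,1} encoding are my own choices, not notation from [BMOS03].

```python
import random

def random_walk_examples(f, n, num_examples):
    """Yield labeled examples (x, f(x)) generated by a random walk on {0,1}^n:
    start uniformly at random, then flip one uniformly random coordinate per step."""
    x = [random.randint(0, 1) for _ in range(n)]  # uniform starting point
    for _ in range(num_examples):
        yield tuple(x), f(x)
        i = random.randrange(n)  # choose a coordinate uniformly at random
        x[i] ^= 1                # flip it; consecutive examples differ in one bit

# Toy usage: a small DNF, (x_0 AND x_1) OR x_3, observed along a walk of length 10.
g = lambda x: (x[0] and x[1]) or x[3]
for example in random_walk_examples(g, n=6, num_examples=10):
    print(example)
```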

The current picture

  Learning model                               Time                      Source
  PAC learning (distributional)                2^{O(n^{1/3} log^2 n)}    [KS01]
  Uniform Distribution                         n^{O(log n)}              [Ver90]
  Random Walk                                  poly(n)                   [BMOS03]
  Uniform Distribution + Membership queries    poly(n)                   [Jac94]

(The models get EASIER going down the table.)

Poly time under uniform?

Perhaps the biggest open problem in DNF learning (in all of learning theory?) is whether DNF can be learned in polynomial time under the uniform distribution. One might ask: what is the current stumbling block? Why can't we seem to do better than n^{log n} time? The answer is that we're stuck on the junta-learning problem.

Definition: A k-junta is a function on n bits which happens to depend on only k bits. (All other n-k coordinates are irrelevant.)

Learning juntas

Since every boolean function on k bits has a DNF of size 2^k (one term per satisfying assignment suffices; this construction is sketched after this section), it follows that the set of all log(n)-juntas is a subset of the set of all polynomial-size DNF formulas. Thus to learn DNF under uniform in polynomial time, we must be able to learn log(n)-juntas under uniform in polynomial time. The problem of learning k-juntas dates back to Blum [B94] and Blum and Langley [BL94]. There is an extremely naive algorithm running in time n^k: essentially, test all possible sets of k variables to see if they are the junta (also sketched below). However, even getting an n^{(1-Ω(1))k} algorithm took some time…

Learning juntas

[Mossel-O-Servedio-03] gave an algorithm for learning k-juntas under the uniform distribution which runs in time n^{0.704k}. The technique involves trading off different polynomial representations of boolean functions. This is not much of an improvement for the important case of k = log n. However, at least it demonstrates that the n^k barrier can be broken. It would be a gigantic breakthrough…
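The containment of log(n)-juntas in polynomial-size DNF uses only the fact that any function of k bits can be written as an OR of at most 2^k AND-terms, one per satisfying assignment. A minimal sketch of that construction follows; the helper names and term encoding are mine, purely for illustration.

```python
from itertools import product

def dnf_from_truth_table(f, k):
    """Return a DNF for a boolean function f on k bits as a list of terms,
    one term per satisfying assignment.  Each term is a list of literals
    (i, bit) meaning 'variable i must equal bit'."""
    terms = []
    for assignment in product([0, 1], repeat=k):
        if f(assignment):
            terms.append([(i, bit) for i, bit in enumerate(assignment)])
    return terms

def evaluate_dnf(terms, x):
    """Evaluate the DNF: true iff some term has all of its literals satisfied."""
    return any(all(x[i] == bit for i, bit in term) for term in terms)

# Example: a 3-bit function; its DNF has at most 2^3 = 8 terms.
h = lambda x: (x[0] and not x[1]) or x[2]
terms = dnf_from_truth_table(h, 3)
assert all(evaluate_dnf(terms, x) == bool(h(x)) for x in product([0, 1], repeat=3))
```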

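And here is the naive n^k-time junta learner mentioned above: enumerate every candidate set of k relevant variables and keep one whose restriction is consistent with all of the (uniform-distribution) examples. This is only the trivial search, not the [MOS03] algorithm, and the function names are illustrative.

```python
from itertools import combinations
import random

def learn_junta_brute_force(examples, n, k):
    """Naive n^k-time junta learning: for each of the ~n^k candidate sets of k
    variables, check whether the labels are a function of those bits alone;
    if so, return the set together with the induced truth table."""
    for S in combinations(range(n), k):
        table = {}
        consistent = True
        for x, y in examples:
            key = tuple(x[i] for i in S)       # restriction of x to the candidate set
            if table.setdefault(key, y) != y:  # same restriction, conflicting labels
                consistent = False
                break
        if consistent:
            return S, table                    # with enough examples, wrong sets are
                                               # ruled out with high probability
    return None

# Toy usage: a 3-junta on n = 20 bits, learned from uniform random examples.
n, k = 20, 3
target = lambda x: (x[2] and x[7]) ^ x[11]
examples = []
for _ in range(2000):
    x = tuple(random.randint(0, 1) for _ in range(n))
    examples.append((x, target(x)))
print(learn_junta_brute_force(examples, n, k))
```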