
On the Convergence Rate of ℓp-Norm Multiple Kernel Learning

  • We derive an upper bound on the local Rademacher complexity of ℓp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local analyses covered only the case p = 1, whereas our analysis covers all cases 1 ≤ p ≤ ∞, assuming the feature mappings corresponding to the different kernels to be uncorrelated. We also give a lower bound showing that the upper bound is tight, and derive consequences regarding the excess loss, namely fast convergence rates of the order O(n^(-α/(1+α))), where α is the minimum eigenvalue decay rate of the individual kernels.
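As a sketch of the rate stated in the abstract (the symbols below are illustrative and not defined in this record): if each kernel's eigenvalue sequence decays polynomially, the excess risk of the learned predictor f̂ is bounded as

```latex
% Illustrative sketch, assuming polynomial eigenvalue decay
% \lambda_j^{(m)} = O(j^{-\alpha}) for every kernel m,
% with \alpha the minimum decay rate across kernels:
\mathcal{E}(\hat{f}) \;=\; O\!\left( n^{-\frac{\alpha}{1+\alpha}} \right).
```

Note that as α → ∞ the exponent approaches 1, recovering the fast O(1/n) regime, while α = 1 gives the slow O(n^{-1/2}) rate typical of global Rademacher bounds.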

Metadata
Author:Marius Kloft, Gilles Blanchard
ISSN:1532-4435 (print)
Parent Title (English):Journal of Machine Learning Research
Publisher:Microtome Publishing
Place of publication:Brookline
Document Type:Article
Language:English
Year of first Publication:2012
Year of Completion:2012
Release Date:2017/03/26
Tag:generalization bounds; learning kernels; local Rademacher complexity; multiple kernel learning
Volume:13
Page Count:38
First Page:2465
Last Page:2502
Funder:German Science Foundation [DFG MU 987/6-1, RA 1894/1-1]; World Class University Program through the National Research Foundation of Korea; Korean Ministry of Education, Science, and Technology [R31-10008]; European Community [247022]; European Community's 7th Framework Programme under the PASCAL2 Network of Excellence [ICT-216886]