
On Convergence Rate of Concave-Convex Procedure

The Concave-Convex Procedure (CCCP) has been widely used to solve nonconvex d.c. (difference of convex functions) programs that occur in learning problems, such as the sparse support vector machine (SVM), transductive SVM, and sparse principal component analysis (PCA). Although the global convergence behavior of CCCP has been well studied, its convergence rate remains an open problem. Most d.c. programs in machine learning involve constraints or a nonsmooth objective function, which prohibits convergence analysis via a differentiable map. In this paper, we approach the problem in a different manner by connecting CCCP with the more general block coordinate descent method. We show that a recent convergence result for coordinate gradient descent on nonconvex, nonsmooth problems also applies to exact alternating minimization. This implies that the convergence rate of CCCP is at least linear when the nonsmooth part of the d.c. program is piecewise-linear and the smooth part is strictly convex quadratic. Many d.c. programs in the SVM literature fall into this case.
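To make the CCCP scheme described above concrete, here is a minimal sketch on a toy 1-D d.c. program (this example is illustrative and not taken from the paper): we minimize f(x) = u(x) - v(x) with u(x) = x^4 and v(x) = 2x^2, both convex. Each CCCP step linearizes the concave part -v at the current iterate and solves the resulting convex subproblem, which here has a closed form.

```python
# CCCP sketch on the toy d.c. program f(x) = x**4 - 2*x**2.
# Subproblem at iterate x_t:  x_{t+1} = argmin_x  x**4 - v'(x_t) * x,
# whose stationarity condition 4*x**3 = v'(x_t) gives a closed-form cube root.

def f(x):
    return x**4 - 2 * x**2

def v_grad(x):
    # Gradient of the convex part v(x) = 2*x**2 that enters with a minus sign.
    return 4 * x

def cccp(x0, iters=50):
    x, history = x0, [f(x0)]
    for _ in range(iters):
        g = v_grad(x)                                   # linearize -v at x_t
        # Closed-form minimizer of the convex subproblem x**4 - g*x.
        x = (g / 4) ** (1 / 3) if g >= 0 else -((-g / 4) ** (1 / 3))
        history.append(f(x))                            # objective after the step
    return x, history

x_star, hist = cccp(2.0)
# CCCP is a descent method: each entry of `hist` is no larger than the previous,
# and the iterates approach the local minimizer x = 1 (where f(1) = -1).
```

The monotone decrease of `hist` illustrates the descent property that underlies the convergence analysis; the linear rate discussed in the abstract concerns how fast this sequence approaches its limit in the piecewise-linear-plus-quadratic case.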
Attribution: The Open Education Consortium
http://www.ocwconsortium.org/courses/view/7cf1e3f275edf3b949bdb6875614f080/
Course Home http://videolectures.net/nipsworkshops2012_yen_procedure/