Special Lecture by Prof. Ching-pei Lee (Institute of Statistical Science, Academia Sinica)

  • Event date: 2021-03-12


Title: Improving Proximal-Newton-Type Methods for Regularized Optimization

Speaker: Prof. Ching-pei Lee (Institute of Statistical Science, Academia Sinica)

Time: Friday, March 12, 2021, 10:40-11:30 AM

(Tea reception from 10:20 to 10:40 AM in Room 428, Institute of Statistics, NCTU)

Venue: Room 427, General Building I (綜合一館), NCTU

Abstract
 
For regularized optimization, which minimizes the sum of a differentiable term and a regularizer that promotes structured solutions, inexact proximal-Newton-type methods, also known as successive quadratic approximation (SQA) methods, are widely used for their superlinear convergence in terms of iterations. This talk discusses our recent improvements of SQA in both theory and practice and consists of two parts. In the first part, we discuss its superlinear behavior for degenerate problems. In particular, we show that superlinear and even quadratic convergence is still attainable without a positive definite Hessian at the solutions, and even without twice-differentiability of the smooth term. We then propose a simple variant that improves on existing ones: in addition to the minimum requirement of ensuring both global convergence and local superlinear convergence, the proposed variant also guarantees that the function value is always improving.

In the second half of this talk, we consider further improvements of SQA by utilizing the solution structure. More specifically, we first show that for partly smooth regularizers, although general inexact solutions cannot identify the active manifold that makes the objective function smooth, approximate solutions generated by commonly used subproblem solvers do identify this manifold, even with arbitrarily low solution precision. We then utilize this property to propose an improved SQA method that switches to efficient smooth optimization methods after this manifold is identified. We show that for a wide class of degenerate solutions, the proposed method possesses superlinear convergence not only in the number of iterations but also in running time, because the cost per iteration is bounded. In particular, our superlinear convergence result holds for problems satisfying a sharpness condition more general than that in the existing literature. Experiments on real-world problems also confirm that our method greatly improves on the state of the art for regularized optimization.
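To fix notation for readers less familiar with this setting (the symbols below are ours, not taken from the abstract), SQA methods target problems of the following form and, at the current iterate x_k, approximately minimize a quadratic model of the smooth term plus the unchanged regularizer:

\[
\min_{x \in \mathbb{R}^n} \; F(x) := f(x) + \psi(x),
\qquad
d_k \approx \operatorname*{arg\,min}_{d} \; \nabla f(x_k)^\top d + \tfrac{1}{2} d^\top H_k d + \psi(x_k + d),
\]

where f is the differentiable term, ψ is the regularizer (for example, the ℓ1 norm), and H_k is an approximation of the Hessian of f at x_k; "inexact" means the subproblem is solved only approximately before updating x_{k+1} = x_k + d_k (possibly with a step size).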
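Purely as an illustration of the ingredients mentioned in the abstract, and not the speaker's actual algorithm, the following minimal Python sketch runs an inexact SQA loop for ℓ1-regularized least squares: each subproblem is solved with a few proximal-gradient steps, and once the support of the iterate (the active manifold of the ℓ1 norm) stops changing, a reduced Newton step is taken on the nonzero coordinates. All names, parameters, and the switching rule here are hypothetical simplifications.

import numpy as np


def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def sqa_l1_least_squares(A, b, lam, outer_iters=50, inner_iters=10):
    """Illustrative inexact proximal-Newton (SQA) loop for
    min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1.

    Each outer iteration approximately minimizes the quadratic model
        grad^T d + 0.5 * d^T H d + lam * ||x + d||_1,   with H = A^T A,
    using a few proximal-gradient steps, then takes a reduced Newton step
    on the nonzero coordinates once the support stops changing.
    No line search or safeguards; for illustration only.
    """
    m, n = A.shape
    x = np.zeros(n)
    H = A.T @ A
    L = np.linalg.norm(H, 2) + 1e-12   # Lipschitz constant of the model gradient
    prev_support = None
    for _ in range(outer_iters):
        grad = A.T @ (A @ x - b)
        # Inexact SQA subproblem: proximal-gradient steps on the quadratic model.
        d = np.zeros(n)
        for _ in range(inner_iters):
            g_model = grad + H @ d
            d = soft_threshold(x + d - g_model / L, lam / L) - x
        x = x + d
        support = np.flatnonzero(x)
        # Crude manifold identification: switch when the support is unchanged.
        if prev_support is not None and np.array_equal(support, prev_support) and support.size:
            As = A[:, support]
            # Newton step on the smooth problem restricted to the fixed support and signs.
            g_s = As.T @ (As @ x[support] - b) + lam * np.sign(x[support])
            step, *_ = np.linalg.lstsq(As.T @ As, g_s, rcond=None)
            x[support] -= step
        prev_support = support
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = sqa_l1_least_squares(A, b, lam=0.1)
    print("recovered support:", np.flatnonzero(x_hat))

A practical method of this kind would add safeguards such as a line search and a more careful manifold-identification test; the sketch omits these to stay short.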
 


The talk will be streamed live via Google Meet, showing the speaker's slides with audio.
The meeting can be joined starting 20 minutes before the talk; click the link below and then press "Ask to join".
 
