Details of Research Outputs

Title: A unified adaptive tensor approximation scheme to accelerate composite convex optimization
Author (Name in English or Pinyin): JIANG, B.1; LIN, T.2; ZHANG, S.3,4
Date Issued: 2020-10-08
Source Publication: SIAM Journal on Optimization
ISSN: 1052-6234
DOI: 10.1137/19M1286025
Education Discipline: Science and technology
Published Range: Foreign academic journal
Volume, Issue, Pages: Vol. 30, Issue 4, pp. 2897-2926
References
[1] A. A. Ahmadi, A. Olshevsky, P. A. Parrilo, and J. N. Tsitsiklis, NP-hardness of deciding convexity of quartic polynomials and related problems, Math. Program., 137 (2013), pp. 453-476.
[2] Y. Arjevani, O. Shamir, and R. Shiff, Oracle complexity of second-order methods for smooth convex optimization, Math. Program., 178 (2019), pp. 327-360.
[3] M. Baes, Estimate Sequence Methods: Extensions and Approximations, Institute for Operations Research, ETH, Zürich, Switzerland, 2009.
[4] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), pp. 183-202, https://doi.org/10.1137/080716542.
[5] A. S. Berahas, R. Bollapragada, and J. Nocedal, An investigation of Newton-Sketch and subsampled Newton methods, Optim. Methods Software, 35 (2020), pp. 661-680.
[6] R. Bollapragada, R. H. Byrd, and J. Nocedal, Exact and inexact subsampled Newton methods for optimization, IMA J. Numer. Anal., 39 (2019), pp. 545-578.
[7] E. G. Birgin, J. L. Gardenghi, J. M. Martínez, S. A. Santos, and Ph. L. Toint, Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models, SIAM J. Optim., 26 (2016), pp. 951-967, https://doi.org/10.1137/15M1031631.
[8] E. G. Birgin, J. L. Gardenghi, J. M. Martínez, S. A. Santos, and Ph. L. Toint, Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models, Math. Program., 163 (2017), pp. 359-368.
[9] B. Bullins and R. Peng, Higher-Order Accelerated Methods for Faster Nonsmooth Optimization, preprint, https://arxiv.org/abs/1906.01621, 2019.
[10] S. Bubeck, Q. Jiang, Y. T. Lee, Y. Li, and A. Sidford, Near-Optimal Method for Highly Smooth Convex Optimization, preprint, https://arxiv.org/abs/1812.08026, 2018.
[11] R. H. Byrd, J. Nocedal, and F. Oztoprak, An inexact successive quadratic approximation method for l1 regularized optimization, Math. Program., 157 (2016), pp. 375-396.
[12] L. Calatroni and A. Chambolle, Backtracking Strategies for Accelerated Descent Methods with Smooth Composite Objectives, preprint, https://arxiv.org/abs/1709.09004, 2017.
[13] Y. Carmon and J. Duchi, Gradient Descent Efficiently Finds the Cubic-Regularized Nonconvex Newton Step, preprint, https://arxiv.org/abs/1612.00547v2, 2016.
[14] C. Cartis, N. I. M. Gould, and P. L. Toint, Adaptive cubic regularisation methods for unconstrained optimization, part I: Motivation, convergence and numerical results, Math. Program., 127 (2011), pp. 245-295.
[15] C. Cartis, N. I. M. Gould, and P. L. Toint, Adaptive cubic regularisation methods for unconstrained optimization, part II: Worst-case function- and derivative-evaluation complexity, Math. Program., 130 (2011), pp. 295-319.
[16] C. Cartis, N. I. M. Gould, and P. L. Toint, Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization, Optim. Methods Software, 27 (2012), pp. 197-219.
[17] C. Cartis, N. I. M. Gould, and P. L. Toint, On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization, SIAM J. Optim., 22 (2012), pp. 66-86, https://doi.org/10.1137/100812276.
[18] C. Cartis, N. I. M. Gould, and Ph. L. Toint, Improved Second-Order Evaluation Complexity for Unconstrained Nonlinear Optimization Using High-Order Regularized Models, preprint, https://arxiv.org/abs/1708.04044, 2017.
[19] C. Cartis, N. I. M. Gould, and Ph. L. Toint, Second-order optimality and beyond: Characterization and evaluation complexity in convexly constrained nonlinear optimization, Found. Comput. Math., 18 (2018), pp. 1073-1107.
[20] C. Cartis, N. I. M. Gould, and Ph. L. Toint, Universal regularization methods: Varying the power, the smoothness and the accuracy, SIAM J. Optim., 29 (2019), pp. 595-615, https://doi.org/10.1137/16M1106316.
[21] X. Chen, Ph. L. Toint, and H. Wang, Complexity of partially separable convexly constrained optimization with non-Lipschitzian singularities, SIAM J. Optim., 29 (2019), pp. 874-903, https://doi.org/10.1137/18M1166511.
[22] X. Chen and Ph. L. Toint, High-order evaluation complexity for convexly-constrained optimization with non-Lipschitzian group sparsity terms, Math. Program., to appear; published online Jan. 28, 2020, https://doi.org/10.1007/s10107-020-01470-9.
[23] J. Duchi, E. Hazan, and Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization, J. Mach. Learn. Res., 12 (2011), pp. 2121-2159.
[24] A. Gasnikov, P. Dvurechensky, E. Gorbunov, E. Vorontsova, D. Selikhanovych, C. A. Uribe, B. Jiang, H. Wang, S. Zhang, S. Bubeck, Q. Jiang, Y. T. Lee, Y. Li, and A. Sidford, Near optimal methods for minimizing convex functions with Lipschitz p-th derivatives, in Proceedings of the Thirty-Second Conference on Learning Theory (COLT), Phoenix, AZ, Proc. Mach. Learn. Res. 99, 2019, pp. 1392-1393.
[25] S. Ghadimi, G. Lan, and H. Zhang, Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization, Math. Program., 155 (2016), pp. 267-305.
[26] S. Ghadimi and G. Lan, Accelerated gradient methods for nonconvex nonlinear and stochastic programming, Math. Program., 156 (2016), pp. 59-99.
[27] S. Ghadimi, H. Liu, and T. Zhang, Second-Order Methods with Cubic Regularization under Inexact Information, preprint, https://arxiv.org/abs/1710.05782, 2017.
[28] H. Ghanbari and K. Scheinberg, Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates, Comput. Optim. Appl., 68 (2018), pp. 597-627.
[29] G. N. Grapiglia and Y. Nesterov, Accelerated regularized Newton methods for minimizing composite convex functions, SIAM J. Optim., 29 (2019), pp. 77-99, https://doi.org/10.1137/17M1142077.
[30] G. N. Grapiglia and Y. Nesterov, Tensor Methods for Minimizing Functions with Hölder Continuous Higher-Order Derivatives, preprint, https://arxiv.org/abs/1904.12559, 2019.
[31] G. N. Grapiglia and Y. Nesterov, Tensor Methods for Finding Approximate Stationary Points of Convex Functions, preprint, https://arxiv.org/abs/1907.07053, 2019.
[32] B. Jiang, Z. Li, and S. Zhang, On cones of nonnegative quartic forms, Found. Comput. Math., 17 (2017), pp. 161-197.
[33] B. Jiang, T. Lin, and S. Zhang, A Unified Scheme to Accelerate Adaptive Cubic Regularization and Gradient Methods for Convex Optimization, preprint, https://arxiv.org/abs/1710.04788, 2017.
[34] B. Jiang, T. Lin, S. Ma, and S. Zhang, Structured nonconvex and nonsmooth optimization: Algorithms and iteration complexity analysis, Comput. Optim. Appl., 72 (2019), pp. 115-157.
[35] B. Jiang, H. Wang, and S. Zhang, An optimal high-order tensor method for convex optimization, Math. Oper. Res., to appear.
[36] A. Karpathy, A Peek at Trends in Machine Learning, Medium, posted April 7, 2017, https://medium.com/@karpathy/a-peek-at-trends-in-machine-learning-ab8a1085a106.
[37] D. Kingma and J. Ba, Adam: A Method for Stochastic Optimization, preprint, https://arxiv.org/abs/1412.6980, 2014.
[38] J. D. Lee, Y. Sun, and M. A. Saunders, Proximal Newton-type methods for minimizing composite functions, SIAM J. Optim., 24 (2014), pp. 1420-1443, https://doi.org/10.1137/130921428.
[39] Q. Lin and L. Xiao, An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization, Comput. Optim. Appl., 60 (2014), pp. 633-674.
[40] D. G. Luenberger and Y. Ye, Linear and Nonlinear Programming, 4th ed., Internat. Ser. Oper. Res. Management Sci. 228, Springer, Cham, 2016.
[41] J. M. Martínez, On high-order model regularization for constrained optimization, SIAM J. Optim., 27 (2017), pp. 2447-2458, https://doi.org/10.1137/17M1115472.
[42] R. D. C. Monteiro, C. Ortiz, and B. F. Svaiter, An adaptive accelerated first-order method for convex optimization, Comput. Optim. Appl., 64 (2016), pp. 31-73.
[43] R. D. C. Monteiro and B. F. Svaiter, An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods, SIAM J. Optim., 23 (2013), pp. 1092-1125, https://doi.org/10.1137/110833786.
[44] Y. Nesterov, A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2), Dokl. Akad. Nauk SSSR, 269 (1983), pp. 543-547 (in Russian).
[45] Y. Nesterov and B. T. Polyak, Cubic regularization of Newton method and its global performance, Math. Program., 108 (2006), pp. 177-205.
[46] Y. Nesterov, Accelerating the cubic regularization of Newton's method on convex problems, Math. Program., 112 (2008), pp. 159-181.
[47] Y. Nesterov, Gradient methods for minimizing composite functions, Math. Program., 140 (2013), pp. 125-161.
[48] Y. Nesterov, Lectures on Convex Optimization, 2nd ed., Springer Optim. Appl. 137, Springer, Cham, 2018.
[49] Y. Nesterov, Implementable tensor methods in unconstrained convex optimization, Math. Program., to appear; published online Nov. 21, 2019, https://doi.org/10.1007/s10107-019-01449-1.
[50] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 2006.
[51] K. Scheinberg, D. Goldfarb, and X. Bai, Fast first-order methods for composite convex optimization with backtracking, Found. Comput. Math., 14 (2014), pp. 389-417.
[52] K. Scheinberg and X. Tang, Practical inexact proximal quasi-Newton method with global complexity analysis, Math. Program., 160 (2016), pp. 495-529.
[53] T. Tieleman and G. Hinton, Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude, COURSERA: Neural Networks for Machine Learning, 4 (2012), pp. 26-31.
[54] A. Wilson, L. Mackey, and A. Wibisono, Accelerating Rescaled Gradient Descent: Fast Optimization of Smooth Functions, preprint, https://arxiv.org/abs/1902.08825, 2019.
Citation statistics
Cited Times: 9 [WOS]
Document Type: Journal article
Identifier: https://irepository.cuhk.edu.cn/handle/3EPUXD0A/1785
Collection: School of Data Science
Corresponding Author: JIANG, B.
Affiliation
1.Research Institute for Interdisciplinary Sciences, School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai, 200433, China
2.Department of Industrial Engineering and Operations Research, UC Berkeley, Berkeley, CA 94720, United States
3.Department of Industrial and Systems Engineering, University of Minnesota, Minneapolis, MN 55455, United States
4.Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, Shenzhen, China
Recommended Citation
GB/T 7714: JIANG, B., LIN, T., ZHANG, S. A unified adaptive tensor approximation scheme to accelerate composite convex optimization[J]. SIAM Journal on Optimization, 2020, 30(4): 2897-2926.
APA: JIANG, B., LIN, T., & ZHANG, S. (2020). A unified adaptive tensor approximation scheme to accelerate composite convex optimization. SIAM Journal on Optimization, 30(4), 2897-2926.
MLA: JIANG, B., et al. "A unified adaptive tensor approximation scheme to accelerate composite convex optimization." SIAM Journal on Optimization (2020).
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.