Details of Research Outputs

Title: UVeQFed: Universal Vector Quantization for Federated Learning
Authors: Shlezinger, Nir (1); Chen, Mingzhe (2); Eldar, Yonina C. (3); Poor, H. Vincent (2); Cui, Shuguang (4)
Date Issued: 2021
Source Publication: IEEE TRANSACTIONS ON SIGNAL PROCESSING
ISSN: 1053-587X
DOI: 10.1109/TSP.2020.3046971
Indexed By: SCIE
First-level Discipline: Information Science and Systems Science
Education Discipline: Science and Technology
Publication Range: Foreign academic journals
Volume/Pages: Vol. 69, pp. 500-514
Citation Statistics
Cited Times [WOS]: 0
Document Type: Journal article
Identifier: https://irepository.cuhk.edu.cn/handle/3EPUXD0A/1717
Collection: School of Science and Engineering
Corresponding Author: Shlezinger, Nir
Affiliations:
1. Ben Gurion Univ Negev, Sch Elect & Comp Engn, IL-8410501 Beer Sheva, Israel
2. Princeton Univ, Elect Engn Dept, Princeton, NJ 08544 USA
3. Weizmann Inst Sci, Fac Math & Comp Sci, IL-7610001 Rehovot, Israel
4. Chinese Univ Hong Kong, Shenzhen Res Inst, Big Data & Future Network Intelligence Inst (FNii), Shenzhen 518172, Peoples R China
Recommended Citation
GB/T 7714
Shlezinger, Nir, Chen, Mingzhe, Eldar, Yonina C., et al. UVeQFed: Universal Vector Quantization for Federated Learning[J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021.
APA: Shlezinger, Nir, Chen, Mingzhe, Eldar, Yonina C., Poor, H. Vincent, & Cui, Shuguang. (2021). UVeQFed: Universal Vector Quantization for Federated Learning. IEEE TRANSACTIONS ON SIGNAL PROCESSING.
MLA: Shlezinger, Nir, et al. "UVeQFed: Universal Vector Quantization for Federated Learning". IEEE TRANSACTIONS ON SIGNAL PROCESSING (2021).
Files in This Item:
There are no files associated with this item.
Related Services
Usage statistics
Google Scholar
Similar articles in Google Scholar
[Shlezinger, Nir]'s Articles
[Chen, Mingzhe]'s Articles
[Eldar, Yonina C.]'s Articles
Baidu academic
Similar articles in Baidu academic
[Shlezinger, Nir]'s Articles
[Chen, Mingzhe]'s Articles
[Eldar, Yonina C.]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Shlezinger, Nir]'s Articles
[Chen, Mingzhe]'s Articles
[Eldar, Yonina C.]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.