Title  UVeQFed: Universal Vector Quantization for Federated Learning 
Author (Name in English or Pinyin)  Shlezinger, Nir; Chen, Mingzhe; Eldar, Yonina C.; Poor, H. Vincent; Cui, Shuguang 
Date Issued  2021 
Source Publication  IEEE TRANSACTIONS ON SIGNAL PROCESSING 
ISSN  1053-587X 
DOI  10.1109/TSP.2020.3046971 
Indexed By  SCIE 
First-level Discipline  Information Science and Systems Science 
Education Discipline  Science and Technology 
Published Range  Foreign academic journal 
Volume, Issue, Pages  Volume: 69, Pages: 500–514 
Citation statistics 
Cited Times [WOS]: 0 
Document Type  Journal article 
Identifier  https://irepository.cuhk.edu.cn/handle/3EPUXD0A/1717 
Collection  School of Science and Engineering 
Corresponding Author  Shlezinger, Nir 
Affiliation  1. Ben Gurion Univ Negev, Sch Elect & Comp Engn, IL-8410501 Beer Sheva, Israel 2. Princeton Univ, Elect Engn Dept, Princeton, NJ 08544 USA 3. Weizmann Inst Sci, Fac Math & Comp Sci, IL-7610001 Rehovot, Israel 4. Chinese Univ Hong Kong, Shenzhen Res Inst, Big Data & Future Network Intelligence Inst (FNii), Shenzhen 518172, Peoples R China 
Recommended Citation GB/T 7714  Shlezinger, Nir, Chen, Mingzhe, Eldar, Yonina C., et al. UVeQFed: Universal Vector Quantization for Federated Learning[J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021. 
APA  Shlezinger, Nir, Chen, Mingzhe, Eldar, Yonina C., Poor, H. Vincent, & Cui, Shuguang. (2021). UVeQFed: Universal Vector Quantization for Federated Learning. IEEE TRANSACTIONS ON SIGNAL PROCESSING. 
MLA  Shlezinger, Nir, et al. "UVeQFed: Universal Vector Quantization for Federated Learning". IEEE TRANSACTIONS ON SIGNAL PROCESSING (2021). 
Files in This Item:  
There are no files associated with this item. 
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.