Title | SRGC-Nets: Sparse Repeated Group Convolutional Neural Networks |
Author (Name in English or Pinyin) | Yao Lu; Guangming Lu; Rui Lin; Jinxing Li; David Zhang |
Date Issued | 2019-09-09 |
Source Publication | IEEE Transactions on Neural Networks and Learning Systems |
DOI | 10.1109/TNNLS.2019.2933665 |
Indexed By | EI |
Funding Project | National Natural Science Foundation of China |
First-level Discipline | Computer Science and Technology |
Education Discipline | Science and Technology |
Published Range | Foreign academic journal |
Citation statistics | |
Document Type | Journal article |
Identifier | https://irepository.cuhk.edu.cn/handle/3EPUXD0A/1406 |
Collection | School of Data Science |
Corresponding Author | Guangming Lu |
Affiliation | 1.Harbin Institute of Technology (Shenzhen), Shenzhen 518055, China 2.School of Science and Engineering 3.College of Information Science and Technology, University of Science and Technology of China, Hefei 230052, China 4.School of Data Science |
Recommended Citation GB/T 7714 | Yao Lu, Guangming Lu, Rui Lin, et al. SRGC-Nets: Sparse Repeated Group Convolutional Neural Networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2019. |
APA | Yao Lu, Guangming Lu, Rui Lin, Jinxing Li, & David Zhang. (2019). SRGC-Nets: Sparse Repeated Group Convolutional Neural Networks. IEEE Transactions on Neural Networks and Learning Systems. |
MLA | Yao Lu, et al. "SRGC-Nets: Sparse Repeated Group Convolutional Neural Networks." IEEE Transactions on Neural Networks and Learning Systems (2019). |
Files in This Item: | |
There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.