Details of Research Outputs

Title: Ontology-Based Global and Collective Motion Patterns for Event Classification in Basketball Videos
Author (Name in English or Pinyin): Wu, L.1; Yang, Z.1; He, J.1; Jian, M.1; Xu, Y.1; Xu, D.1; Chen, C.W.2
Date Issued: 2020-07-01
Source Publication: IEEE Transactions on Circuits and Systems for Video Technology
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2019.2912529
Education discipline: Science and technology
Published range: Foreign academic journal
Volume, Issue, Pages: Volume 30, Issue 7, Pages 2178-2190
Citation statistics
Cited Times: 17 (Web of Science)
Document Type: Journal article
Identifier: https://irepository.cuhk.edu.cn/handle/3EPUXD0A/1620
Collection: School of Science and Engineering
Corresponding Author: Jian, M.
Affiliation:
1. Faculty of Information Technology, College of Information and Communication Engineering, Beijing University of Technology, Beijing, China
2. School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Shenzhen, China
Recommended Citation
GB/T 7714: Wu, L., Yang, Z., He, J., et al. Ontology-Based Global and Collective Motion Patterns for Event Classification in Basketball Videos[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020.
APA: Wu, L., Yang, Z., He, J., Jian, M., Xu, Y., Xu, D., & Chen, C.W. (2020). Ontology-Based Global and Collective Motion Patterns for Event Classification in Basketball Videos. IEEE Transactions on Circuits and Systems for Video Technology.
MLA: Wu, L., et al. "Ontology-Based Global and Collective Motion Patterns for Event Classification in Basketball Videos." IEEE Transactions on Circuits and Systems for Video Technology (2020).
Files in This Item: There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.