Details of Research Outputs

Title: Residual MeshNet: Learning to deform meshes for single-view 3D reconstruction
Authors (Name in English or Pinyin): Pan, Junyi1; Li, Jun2; Han, Xiaoguang3; Jia, Kui1
Date Issued: 2018-10-12
Conference Name: Proceedings - 2018 International Conference on 3D Vision, 3DV 2018
Source Publication: Proc. - Int. Conf. 3D Vis., 3DV
Conference Place: Verona, Italy
Indexed By: EI
First-level Discipline: Computer Science and Technology
Education Discipline: Science and Technology
Published Range: Foreign academic venues
Volume/Issue/Pages: pp. 719-727
Citation Statistics
Cited Times (WOS): 16
Document Type: Conference paper
Collection: Shenzhen Research Institute of Big Data; School of Science and Engineering
Corresponding Author: Jia, Kui
Affiliations:
1. School of Electronic and Information Engineering, South China University of Technology, China
2. University of Technology Sydney, Australia
3. Shenzhen Research Institute of Big Data, Chinese University of Hong Kong (Shenzhen), China
Recommended Citation
GB/T 7714
Pan, Junyi, Li, Jun, Han, Xiaoguang, et al. Residual MeshNet: Learning to deform meshes for single-view 3D reconstruction[C], 2018.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.