Details of Research Outputs

Title: Sparse Lifting of Dense Vectors: A Unified Approach to Word and Sentence Representations
Author (Name in English or Pinyin): Hao, S.¹; Li, W.¹,²
Date Issued: 2020
Source Publication: Communications in Computer and Information Science
ISSN: 1865-0929
DOI: 10.1007/978-3-030-63820-7_82
Indexed By: SCOPUS
Education Discipline: Science and technology
Published Range: Foreign academic journal
Volume, Issue, Pages: Volume 1332, Pages 717–725
References
[1] Arora, S., Liang, Y., Ma, T.: A simple but tough-to-beat baseline for sentence embeddings. In: ICLR 2017 (2017)
[2] Bengio, Y., Ducharme, R., Vincent, P., Jauvin, C.: A neural probabilistic language model. J. Mach. Learn. Res. 3, 1137–1155 (2003)
[3] Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 27 (2011)
[4] Dasgupta, S., Stevens, C., Navlakha, S.: A neural algorithm for a fundamental computing problem. Science 358(6364), 793–796 (2017)
[5] Ding, C., He, X., Simon, H.: On the equivalence of nonnegative matrix factorization and spectral clustering. In: SIAM SDM 2005, pp. 606–610 (2005)
[6] Dumais, S.: Latent semantic analysis. Ann. Rev. Inf. Sci. Technol. 38(1), 188–230 (2004)
[7] Faruqui, M., Tsvetkov, Y., Yogatama, D., Dyer, C., Smith, N.: Sparse overcomplete word vector representations. arXiv:1506.02004 (2015)
[8] Gehring, J., Auli, M., Grangier, D., Yarats, D., Dauphin, Y.: Convolutional sequence to sequence learning. arXiv:1705.03122 (2017)
[9] Hu, M., Liu, B.: Mining and summarizing customer reviews. In: ACM SIGKDD 2004, pp. 168–177 (2004)
[10] Joachims, T.: Text categorization with support vector machines: learning with many relevant features. In: ECML 1998, pp. 137–142 (1998)
[11] Kim, Y.: Convolutional neural networks for sentence classification. arXiv:1408.5882 (2014)
[12] Kuang, D., Yun, S., Park, H.: SymNMF: nonnegative low-rank approximation of a similarity matrix for graph clustering. J. Global Optim. 62(3), 545–574 (2015)
[13] Kusner, M., Sun, Y., Kolkin, N., Weinberger, K.: From word embeddings to document distances. In: ICML 2015, pp. 957–966 (2015)
[14] Lebret, R., Collobert, R.: Word embeddings through Hellinger PCA. arXiv:1312.5542 (2013)
[15] Lee, D., Seung, H.: Learning the parts of objects by non-negative matrix factorization. Nature 401(6755), 788 (1999)
[16] Lee, D., Seung, H.: Algorithms for non-negative matrix factorization. In: NIPS 2001, pp. 556–562 (2001)
[17] Li, W.: Modeling winner-take-all competition in sparse binary projections. In: ECML-PKDD 2020 (2020)
[18] Li, W., Mao, J., Zhang, Y., Cui, S.: Fast similarity search via optimal sparse lifting. In: NeurIPS 2018, pp. 176–184 (2018)
[19] Li, X., Roth, D.: Learning question classifiers. In: COLING 2002, pp. 1–7 (2002)
[20] Ma, C., Gu, C., Li, W., Cui, S.: Large-scale image retrieval with sparse binary projections. In: ACM SIGIR 2020, pp. 1817–1820 (2020)
[21] McDonald, S., Ramscar, M.: Testing the distributional hypothesis: the influence of context on judgements of semantic similarity. In: CogSci 2001 (2001)
[22] Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. arXiv:1301.3781 (2013)
[23] Murphy, B., Talukdar, P., Mitchell, T.: Learning effective and interpretable semantic models using non-negative sparse embedding. In: COLING 2012, pp. 1933–1950 (2012)
[24] Pang, B., Lee, L.: Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales. In: ACL 2005, pp. 115–124 (2005)
[25] Pennington, J., Socher, R., Manning, C.: GloVe: global vectors for word representation. In: EMNLP 2014, pp. 1532–1543 (2014)
[26] Salton, G., McGill, M.: Introduction to Modern Information Retrieval. McGraw-Hill Inc., New York (1986)
[27] Subramanian, A., Pruthi, D., Jhamtani, H., Berg-Kirkpatrick, T., Hovy, E.: SPINE: sparse interpretable neural embeddings. In: AAAI 2018, pp. 4921–4928 (2018)
[28] Sun, F., Guo, J., Lan, Y., Xu, J., Cheng, X.: Sparse word embeddings using l1 regularized online learning. In: IJCAI 2016, pp. 2915–2921 (2016)
[29] Turney, P.: Leveraging term banks for answering complex questions: a case for sparse vectors. arXiv:1704.03543 (2017)
[30] Wiebe, J., Wilson, T., Cardie, C.: Annotating expressions of opinions and emotions in language. Lang. Resour. Eval. 39(2–3), 165–210 (2005)
[31] Yang, J., Jiang, Y., Hauptmann, A., Ngo, C.: Evaluating bag-of-visual-words representations in scene classification. In: ACM SIGMM MIR 2007, pp. 197–206 (2007)
[32] Yogatama, D., Faruqui, M., Dyer, C., Smith, N.: Learning word representations with hierarchical sparse coding. In: ICML 2015, pp. 87–96 (2015)
Citation statistics
Cited Times [WOS]: 0
Document Type: Journal article
Identifier: https://irepository.cuhk.edu.cn/handle/3EPUXD0A/1753
Collection: School of Data Science
Corresponding Author: Li, W.
Affiliation:
1. The Chinese University of Hong Kong, Shenzhen, China
2. Shenzhen Research Institute of Big Data, Shenzhen, China
Corresponding Author Affiliation: Shenzhen Research Institute of Big Data
Recommended Citation
GB/T 7714: Hao, S., Li, W. Sparse Lifting of Dense Vectors: A Unified Approach to Word and Sentence Representations[J]. Communications in Computer and Information Science, 2020.
APA: Hao, S., & Li, W. (2020). Sparse Lifting of Dense Vectors: A Unified Approach to Word and Sentence Representations. Communications in Computer and Information Science.
MLA: Hao, S., et al. "Sparse Lifting of Dense Vectors: A Unified Approach to Word and Sentence Representations." Communications in Computer and Information Science (2020).
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.