A common language
Lili Mou’s research interests include deep learning applied to natural language processing as well as programming language processing. He seeks to build intelligent systems that can understand and interact with humans via natural language, involving both text understanding and text generation. He focuses on fundamental problems in machine learning – especially deep learning – as applied to natural language processing. Lili has published papers on topics including the transferability of neural networks in NLP applications, unsupervised paraphrasing by simulated annealing, and discrete optimization for unsupervised sentence summarization with word-level extraction. His work has been successfully applied to information extraction, sentiment analysis, semantic parsing, syntactic parsing, dialogue systems, paraphrase generation, grammatical error correction and other tasks.
Lili is a Fellow and Canada CIFAR AI Chair at Amii and an Assistant Professor in the Department of Computing Science at the University of Alberta. Lili received his Ph.D. from the School of Electronic Engineering and Computer Science at Peking University, where he earned a number of scholarship and research awards, including a Distinguished Ph.D. Thesis award. He has published more than 30 papers at top-tier conferences and journals, including AAAI, ACL, CIKM, COLING, EMNLP, ICASSP, ICML, IJCAI, INTERSPEECH, NAACL-HLT and TACL, as well as a monograph with Springer. He has delivered talks at institutions such as Queen’s University, the University of Waterloo and the Hong Kong Polytechnic University, and at international venues, including the Annual Conference of the Association for Computational Linguistics.