My research interests include natural language processing (NLP), natural language understanding (NLU), machine learning and machine translation. I am fascinated by the idea of giving computers the ability to understand human language. I like to develop formal, computational models from a statistical perspective, but also enjoy making greater use of linguistic knowledge in the model design.
I am working on the following research topics:
Deep Learning for NLP: In recent years, deep learning has proven effective on many NLP applications. One fundamental research topic of deep learning for NLP is how to learn lexical knowledge, in the form of word embeddings, from huge amounts of unlabeled data. A drawback of this unsupervised learning is that it does not fully utilize the knowledge bases that people have already constructed, and it is also hard to clearly interpret the meaning of each word embedding. We therefore aim to improve the conventional context-based word embedding process by incorporating prior knowledge from semantic resources, such as WordNet or E-HowNet, and from open-domain knowledge bases, such as Freebase and Wikibase. Our goal is to learn word embeddings that are more suitable for reasoning over relationships between entities and whose meanings can be clearly interpreted.
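One simple way to combine context-based embeddings with a semantic resource is a retrofitting-style update, which pulls each word vector toward its neighbors in a lexicon (e.g. WordNet synonym pairs) while keeping it close to its original value. The sketch below is illustrative only; the toy vectors and lexicon are assumptions, not our actual model.

```python
# Hedged sketch: retrofitting-style refinement of word embeddings using
# a semantic lexicon. Toy data; a real system would use vectors trained
# on large corpora and neighbor pairs drawn from WordNet or E-HowNet.

def retrofit(vectors, lexicon, alpha=1.0, iterations=10):
    """Return refined vectors: each word in the lexicon is nudged toward
    the average of its neighbors, balanced against its original vector
    by the weight alpha."""
    new = {w: list(v) for w, v in vectors.items()}
    for _ in range(iterations):
        for word, neighbors in lexicon.items():
            nbrs = [n for n in neighbors if n in new]
            if not nbrs:
                continue
            for d in range(len(new[word])):
                neighbor_sum = sum(new[n][d] for n in nbrs)
                # weighted average of the original vector and the neighbors
                new[word][d] = (alpha * vectors[word][d] + neighbor_sum) \
                               / (alpha + len(nbrs))
    return new

# Toy example: "car" and "automobile" are lexicon neighbors, so their
# vectors should move closer together; "banana" is untouched.
vectors = {"car": [1.0, 0.0], "automobile": [0.0, 1.0], "banana": [5.0, 5.0]}
lexicon = {"car": ["automobile"], "automobile": ["car"]}
fitted = retrofit(vectors, lexicon)
```

After retrofitting, synonyms end up nearer to each other than the raw context-based vectors were, which is one concrete sense in which prior knowledge can make embeddings easier to interpret.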
Deep Natural Language Understanding: In this era of big data, text-based content ranging from news to social media has become pervasive on the web. People write their reports, experiences and opinions on the web and share them with the public or with each other. To better understand this tremendous volume of formal and informal written material, a robust semantic parser is required. It will be built by integrating our existing analysis components, i.e., word segmenter, part-of-speech tagger, syntactic parser and semantic role labeler, with newly developed components: a semantic form converter, a word sense disambiguator and an anaphora/co-reference resolver. We shall follow and mildly extend E-HowNet's semantic representation to express sentence semantics as the output of our semantic parser.
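Architecturally, such a parser is a pipeline in which each component consumes the previous component's output. The sketch below only illustrates that composition; the two stages are trivial placeholders, not the actual segmenter and tagger.

```python
# Hedged sketch of the pipeline architecture: each analysis component is
# a function applied to the output of the one before it. The stages here
# are placeholders standing in for the real tools.

def word_segment(text):
    # placeholder: a real segmenter handles languages without spaces
    return text.split()

def pos_tag(tokens):
    # placeholder: tags everything as a noun for illustration
    return [(tok, "N") for tok in tokens]

def run_pipeline(text, stages):
    """Thread the input through each analysis stage in order."""
    result = text
    for stage in stages:
        result = stage(result)
    return result

parsed = run_pipeline("dogs chase cats", [word_segment, pos_tag])
```

A real system would append the syntactic parser, semantic role labeler and the new semantic-form and co-reference components as further stages in the same list.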
Knowledge Acquisition from the Web: The Internet contains a huge amount of knowledge. Our goal is to automatically acquire common-sense or domain knowledge from the web, using our semantic parser as a tool to mine knowledge from text-based content. The knowledge is represented in the form of relations between objects, such as the AgentPredicateRelation between human and eat, or the TeamMemberRelation between the Lakers and Jimmy Lin. We will utilize E-HowNet, a lexical semantic representation model, to represent such relations.
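To make the relation format concrete, a minimal extractor might map the semantic roles produced by the parser onto relation triples. The input format and the rule below are illustrative assumptions; only the relation name follows the example in the text.

```python
# Hedged sketch: turning a semantic-role-labeled sentence into
# knowledge-base triples. The (word, role) input format and the single
# extraction rule are toy assumptions for illustration.

def extract_relations(labeled_sentence):
    """Collect the agent and predicate from (word, role) pairs and emit
    an (relation, arg1, arg2) triple when both are present."""
    agent = predicate = None
    for word, role in labeled_sentence:
        if role == "Agent":
            agent = word
        elif role == "Predicate":
            predicate = word
    triples = []
    if agent and predicate:
        triples.append(("AgentPredicateRelation", agent, predicate))
    return triples

# "human eat apple" after semantic role labeling:
triples = extract_relations(
    [("human", "Agent"), ("eat", "Predicate"), ("apple", "Patient")])
```

Running many parsed sentences through such rules and aggregating the triples is the basic shape of mining relational knowledge from web text.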
Multi-Engine Machine Translation: Given the wide range of successful statistical MT approaches that have emerged recently, including phrase-based MT, hierarchical phrase-based MT and syntax-oriented MT, it would be beneficial to exploit their individual strengths while avoiding their individual weaknesses. MEMT attempts to do so by either fusing the outputs of multiple translation engines or selecting the best one among them, aiming to improve the overall translation quality. We propose a paraphrasing model to address the MEMT task: we dynamically learn hierarchical paraphrases from target hypotheses and form a synchronous context-free grammar that guides a series of transformations of target hypotheses into fused translations. The model is able to exploit phrasal and structural system-weighted consensus and to utilize the word-ordering information already present in the target hypotheses.
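The simplest form of system consensus is hypothesis selection: pick the engine output that shares the most n-grams with the other engines' outputs, in the spirit of minimum-Bayes-risk selection. The sketch below shows only that baseline idea, not our paraphrasing model or its synchronous grammar.

```python
# Hedged sketch: consensus-based hypothesis selection for MEMT.
# Each hypothesis is scored by its n-gram overlap with the other
# systems' hypotheses, and the highest-scoring one is kept.

def ngrams(tokens, n):
    """Set of n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def consensus_select(hypotheses, n=2):
    """Return the hypothesis with maximal n-gram overlap with the rest."""
    best, best_score = None, -1
    for i, hyp in enumerate(hypotheses):
        grams = ngrams(hyp.split(), n)
        score = sum(len(grams & ngrams(other.split(), n))
                    for j, other in enumerate(hypotheses) if j != i)
        if score > best_score:
            best, best_score = hyp, score
    return best

# Toy outputs from three hypothetical engines for the same source:
hyps = ["the cat sat on the mat",
        "cat sat on a mat",
        "the dog sat on the mat"]
chosen = consensus_select(hyps)
```

Fusion-based MEMT, including our paraphrasing model, goes further by recombining fragments of several hypotheses rather than choosing one whole output, but the consensus signal above is the shared intuition.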