Machine learning techniques have made significant progress in recent years.
Nevertheless, current machine learning techniques support only limited
learning protocols. Because there are only restricted ways to teach
machines, transferring human knowledge to machines remains difficult.
On the other hand, structured tasks, which involve many
interdependent decisions for a given example, are expensive to label.
Given that almost all natural language processing tasks are
structured tasks, it is important to have learning frameworks that
use resources in addition to labeled examples.
This talk addresses the problem of reducing the labeling cost for
structured tasks. We develop advanced machine learning algorithms
that take advantage of indirect supervision together with labeled
data. Indirect supervision can come in the form of constraints or
weaker supervision signals. Our proposed learning frameworks can
handle both structured output problems and problems with latent
structures. We demonstrate the effectiveness of our indirect
supervision frameworks on various natural language processing tasks.
Ming-Wei Chang is a Ph.D. candidate at the University of Illinois at
Urbana-Champaign. His research interests are in machine learning and
its applications to natural language processing.
Ming-Wei has published several papers at leading machine learning
and natural language processing conferences, including ICML, ACL,
EMNLP, NAACL, KDD, and AAAI. In 2009 and 2010, he co-presented
tutorials on combining human knowledge with statistical models at
EACL and NAACL, respectively. Before coming to the United States,
Ming-Wei worked on several projects on support vector machines with
Chih-Jen Lin at National Taiwan University. Together with Chih-Jen
Lin, he won first place in two international machine learning competitions.