From cwhsu at iis.sinica.edu.tw  Tue Oct 5 15:55:39 2021
From: cwhsu at iis.sinica.edu.tw (許喬為)
Date: Tue, 5 Oct 2021 15:55:39 +0800 (CST)
Subject: [Most-ai-contest] Cross-View Training
Message-ID: <961144908.11934592.1633420539842.JavaMail.zimbra@mail.iis.sinica.edu.tw>

Dear all,

You can read about cross-view training in the following papers.

1. For NER: Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning (Alibaba, ACL 2021)
   https://arxiv.org/pdf/2105.03654.pdf
   <= This applies to transformer-based models; I think it is quite helpful and provides an easy way to do cross-view training on BERT-like models (a minimal code sketch of this idea appears at the end of this thread).

2. For sequence labeling, e.g., NER: Semi-Supervised Sequence Modeling with Cross-View Training (Kevin Clark, Minh-Thang Luong, Christopher Manning, Quoc Le; EMNLP 2018)
   https://aclanthology.org/D18-1217.pdf

3. A Survey on Multi-view Learning
   https://arxiv.org/abs/1304.5634

FYI,
Chiao-Wei

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cwhsu at iis.sinica.edu.tw  Tue Oct 5 16:11:21 2021
From: cwhsu at iis.sinica.edu.tw (許喬為)
Date: Tue, 5 Oct 2021 16:11:21 +0800 (CST)
Subject: [Most-ai-contest] Cross-View Training
In-Reply-To: <961144908.11934592.1633420539842.JavaMail.zimbra@mail.iis.sinica.edu.tw>
References: <961144908.11934592.1633420539842.JavaMail.zimbra@mail.iis.sinica.edu.tw>
Message-ID: <1616793580.11937027.1633421481066.JavaMail.zimbra@mail.iis.sinica.edu.tw>

For cross-lingual structured prediction: Multi-View Cross-Lingual Structured Prediction with Minimum Supervision (ACL 2021)
http://faculty.sist.shanghaitech.edu.cn/faculty/tukw/acl21mv.pdf

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
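To make item 1 above concrete, here is a minimal sketch (not code from any of the cited papers) of a cooperative-learning-style cross-view loss for a BERT-like token classifier: the same sentence is encoded once with retrieved external context (view A) and once without (view B), and in addition to the supervised losses on both views, the two label distributions are pulled together with a symmetric KL term. The tensor names and the assumption that the two views' logits are already aligned on the original tokens are mine, for illustration only.

```python
import torch
import torch.nn.functional as F


def cross_view_loss(logits_with_ctx: torch.Tensor,
                    logits_no_ctx: torch.Tensor,
                    labels: torch.Tensor,
                    consistency_weight: float = 1.0) -> torch.Tensor:
    """Supervised loss on both views plus a symmetric KL consistency term.

    logits_with_ctx, logits_no_ctx: [batch, seq_len, num_labels] scores for the
    original sentence tokens only (already aligned between the two views).
    labels: gold tag ids, with -100 marking positions to ignore.
    """
    num_labels = logits_with_ctx.size(-1)

    # Standard token-classification cross-entropy on each view.
    ce_with = F.cross_entropy(logits_with_ctx.view(-1, num_labels),
                              labels.view(-1), ignore_index=-100)
    ce_without = F.cross_entropy(logits_no_ctx.view(-1, num_labels),
                                 labels.view(-1), ignore_index=-100)

    # Symmetric KL between the two label distributions (the cross-view /
    # cooperative-learning term), averaged over non-ignored tokens.
    mask = (labels != -100).unsqueeze(-1).float()
    log_p = F.log_softmax(logits_with_ctx, dim=-1)
    log_q = F.log_softmax(logits_no_ctx, dim=-1)
    p, q = log_p.exp(), log_q.exp()
    kl = ((p * (log_p - log_q) + q * (log_q - log_p)) * mask).sum()
    kl = kl / mask.sum().clamp(min=1.0)

    return ce_with + ce_without + consistency_weight * kl


# Usage sketch with random tensors standing in for the two encoder passes.
if __name__ == "__main__":
    B, T, L = 2, 8, 5
    labels = torch.randint(0, L, (B, T))
    logits_a = torch.randn(B, T, L, requires_grad=True)
    logits_b = torch.randn(B, T, L, requires_grad=True)
    loss = cross_view_loss(logits_a, logits_b, labels)
    loss.backward()  # gradients flow into both views of the encoder
    print(float(loss))
```

In a real setup both sets of logits would come from the same transformer encoder (one pass on "sentence + retrieved context", one on the sentence alone), so the consistency term transfers what the context-augmented view learns into the context-free view.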