
Commit e31e1f7

Patch 1 - Update README file for UniMS work. (#92)
* Update README.md: Update README file for UniMS work.
* Update README.md: Add paper link for the UniMS
1 parent 5baca8c commit e31e1f7

File tree

1 file changed: +3 -0 lines changed

NLP/README.md

Lines changed: 3 additions & 0 deletions
@@ -8,6 +8,9 @@ This repository provides some of the NLP techniques developed by Huawei Noah's A
* [XeroAlign](https://github.com/huawei-noah/noah-research/tree/master/xero_align) allows for efficient SOTA zero-shot cross-lingual transfer with machine translated pairs via a simple and lightweight auxiliary loss, originally published in [ACL Findings 2021](https://aclanthology.org/2021.findings-acl.32/).
* [CrossAligner](https://github.com/huawei-noah/noah-research/tree/master/NLP/cross_aligner) is an extension of XeroAlign (above) with a more effective NER (slot tagging) alignment based on machine translated pairs, new labels/objectives derived from English labels and a SOTA weighted combination of losses. Additional analysis can be found in the appendix; please read our [ACL Findings 2022](https://arxiv.org/abs/2203.09982v1) paper.
* [DyLex](https://github.com/huawei-noah/noah-research/tree/master/NLP/dylex)
+* [UniMS](https://github.com/huawei-noah/noah-research/tree/master/NLP/UniMS) is a unified multimodal summarization framework with an encoder-decoder multitask architecture
+on top of BART, which simultaneously outputs extractive and abstractive summaries, and image selection results. Our framework adopts knowledge distillation to improve
+image selection without any requirement on the existence and quality of image captions. We further introduce the extractive objective in the encoder and visual guided attention in the decoder to better integrate both textual and visual modalities in the conditional text generation. Our unified method achieves a new state-of-the-art result in multimodal summarization, and more details can be found in the [AAAI 2022](https://www.aaai.org/AAAI22Papers/AAAI-5436.ZhangZ.pdf) paper.
* [SumTitles](https://github.com/huawei-noah/noah-research/tree/master/SumTitles)
* [Conversation Graph](https://github.com/huawei-noah/noah-research/tree/master/conv_graph) allows for effective data augmentation, training loss 'augmentation' and a fairer evaluation of dialogue management in a modular conversational agent. We introduce a novel idea of a convgraph to achieve all that. Read more in our [TACL 2021](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00352/97777/Conversation-Graph-Data-Augmentation-Training-and) paper.
* [FreeGBDT](https://github.com/huawei-noah/noah-research/tree/master/freegbdt) investigates whether it is feasible (or superior) to replace the conventional MLP classifier head used with pretrained transformers with a gradient-boosted decision tree. Want to know if it worked? Take a look at the [ACL Findings 2021](https://aclanthology.org/2021.findings-acl.26.pdf) paper!
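
For readers skimming the UniMS entry added above, here is a minimal sketch of how a combined multitask objective along those lines could look. It is written against PyTorch; the function name, the loss weights, and the KL-divergence distillation target for image selection are illustrative assumptions, not the exact formulation from the UniMS paper.

```python
import torch.nn.functional as F

def multitask_summarization_loss(abs_logits, abs_labels,
                                 ext_logits, ext_labels,
                                 img_logits, teacher_img_probs,
                                 w_abs=1.0, w_ext=1.0, w_img=1.0):
    """Hypothetical combination of the three objectives described above.

    abs_logits:  (batch, tgt_len, vocab)  decoder outputs for the abstractive summary
    abs_labels:  (batch, tgt_len)         gold summary token ids (-100 = padding)
    ext_logits:  (batch, n_sents)         encoder-side sentence scores for extraction
    ext_labels:  (batch, n_sents)         0/1 oracle extractive labels (float)
    img_logits:  (batch, n_images)        student scores for image selection
    teacher_img_probs: (batch, n_images)  soft image-relevance targets from a teacher,
                                          used in place of gold image captions
    """
    # Abstractive summarization: token-level cross-entropy over the decoder vocabulary.
    l_abs = F.cross_entropy(abs_logits.view(-1, abs_logits.size(-1)),
                            abs_labels.view(-1), ignore_index=-100)
    # Extractive summarization: binary classification of input sentences in the encoder.
    l_ext = F.binary_cross_entropy_with_logits(ext_logits, ext_labels)
    # Image selection: knowledge distillation, matching the teacher's soft distribution.
    l_img = F.kl_div(F.log_softmax(img_logits, dim=-1),
                     teacher_img_probs, reduction="batchmean")
    # The loss weights here are placeholders; the paper's weighting may differ.
    return w_abs * l_abs + w_ext * l_ext + w_img * l_img
```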
