
Lab Paper Accepted at Top Conference NAACL
Source: Tianyong Hao / South China Normal University
2023-01-25

A paper from the lab has been accepted at NAACL, one of the five top conferences in the NLP field. The first author is second-year master's student Xiaozhi Zhu, and both the first author and the corresponding author are affiliated with South China Normal University.

Title: A Self-supervised Joint Training Framework for Document Reranking

Author: Xiaozhi Zhu, Tianyong Hao, Sijie Cheng, Fu Lee Wang, Hai Liu

Abstract: Pretrained language models such as BERT have been successfully applied to a wide range of natural language processing tasks and have also achieved impressive performance on document reranking. Recent work indicates that further pretraining a language model on task-specific datasets before fine-tuning improves reranking performance. However, pre-training tasks such as masked language modeling and next sentence prediction are based on document context rather than encouraging the model to understand the content of queries in the document reranking task. In this paper, we propose a new self-supervised joint training framework (SJTF) with a self-supervised method called Masked Query Prediction (MQP) to establish semantic relations between given queries and positive documents. The framework randomly masks a query token, encodes the masked query paired with a positive document, and uses a linear layer as a decoder to predict the masked token. In addition, MQP is used to jointly optimize the model with the supervised ranking objective during the fine-tuning stage, without an extra further pre-training stage. Extensive experiments on the MS MARCO passage ranking and TREC Robust datasets show that models trained with our framework obtain significant improvements over the original models.
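To make the abstract concrete, the sketch below reconstructs the joint objective it describes: a supervised pairwise reranking loss plus the MQP loss, computed from a BERT-style encoder over (query, document) pairs. This is an illustration inferred from the abstract only, not the authors' released code; the `bert-base-uncased` checkpoint, the 2-way softmax form of the ranking loss, the loss weight `alpha`, and the helper name `joint_loss` are all assumptions.

```python
# Illustrative sketch of the SJTF joint objective described in the abstract.
# Assumptions (not from the paper): bert-base-uncased, pairwise 2-way softmax
# ranking loss, MQP loss weight alpha = 0.1.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-uncased")
hidden = encoder.config.hidden_size

rank_head = nn.Linear(hidden, 1)                       # relevance score from [CLS]
mqp_decoder = nn.Linear(hidden, tokenizer.vocab_size)  # linear decoder over the vocabulary
alpha = 0.1                                            # assumed weight for the MQP loss

def joint_loss(query: str, pos_doc: str, neg_doc: str) -> torch.Tensor:
    # Supervised ranking loss: score (query, pos) and (query, neg) pairs and
    # train the positive pair to win a 2-way softmax.
    enc = tokenizer([query, query], [pos_doc, neg_doc],
                    padding=True, truncation=True, return_tensors="pt")
    cls = encoder(**enc).last_hidden_state[:, 0]        # (2, hidden) [CLS] vectors
    scores = rank_head(cls).squeeze(-1)                 # (2,)
    rank_loss = F.cross_entropy(scores.unsqueeze(0), torch.tensor([0]))

    # MQP loss: mask one query token in the (query, positive document) pair
    # and predict it from the encoder output at the masked position.
    pair = tokenizer(query, pos_doc, truncation=True, return_tensors="pt")
    ids = pair["input_ids"][0]
    first_sep = (ids == tokenizer.sep_token_id).nonzero()[0].item()
    mask_pos = torch.randint(1, first_sep, (1,)).item() # query tokens sit at 1 .. first_sep-1
    label = ids[mask_pos].item()
    ids[mask_pos] = tokenizer.mask_token_id             # mask in place
    states = encoder(**pair).last_hidden_state[0]       # (seq_len, hidden)
    logits = mqp_decoder(states[mask_pos])              # (vocab_size,)
    mqp_loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([label]))

    # Joint objective, applied during fine-tuning rather than as a separate
    # further pre-training stage, as the abstract states.
    return rank_loss + alpha * mqp_loss
```

In an actual training loop, the encoder, `rank_head`, and `mqp_decoder` would share one optimizer, so a single backward pass through `joint_loss` updates all three components jointly.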

Acceptance type: Long paper

