sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco
DistilBERT for Dense Passage Retrieval trained with Balanced Topic Aware Sampling (TAS-B)
We provide a retrieval-trained DistilBERT-based model (we call this dual-encoder, dot-product scoring architecture BERT_Dot) trained with Balanced Topic Aware Sampling on MSMARCO-Passage.
This instance was trained with a batch size of 256 and can be used to re-rank a candidate set or directly for vector-index-based dense retrieval. The architecture is a 6-layer DistilBERT without architectural additions or modifications (we only change the weights during training); to obtain a query/passage representation we pool the CLS vector. We use the same BERT layers for both query and passage encoding, which yields better results and lowers memory requirements.
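To illustrate this setup, here is a minimal sketch of CLS-pooled encoding and dot-product scoring with the Hugging Face transformers library; the function, variable names, and example texts are ours for illustration (see the repository linked below for the official usage example):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "sebastian-hofstaetter/distilbert-dot-tas_b-b256-msmarco"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

def encode(texts):
    # Run the shared DistilBERT encoder (same weights for queries and passages)
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    # Pool the CLS vector (first token) as the text representation
    return out.last_hidden_state[:, 0, :]

query_vec = encode(["what is dense passage retrieval"])
passage_vecs = encode([
    "Dense retrieval maps queries and passages into the same vector space.",
    "BM25 is a classic sparse retrieval baseline.",
])
# BERT_Dot relevance score: dot product between query and passage vectors
scores = query_vec @ passage_vecs.T
print(scores)
```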
If you want to know more about our efficient batch composition procedure (training can be done on a single consumer GPU in 48 hours) and dual supervision for dense retrieval training, check out our paper: https://arxiv.org/abs/2104.06967
For more information and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/tas-balanced-dense-retrieval
Effectiveness on MSMARCO Passage & TREC-DL’19
We trained our model on the standard MSMARCO training triples (the "small" set, ~400K queries), re-sampled with our TAS-B method. As teachers we used pairwise scores from a BERT_CAT model as well as a ColBERT model for in-batch-negative signals, both published here: https://github.com/sebastian-hofstaetter/neural-ranking-kd
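As a rough sketch of how the pairwise teacher signal can be used (not the full training loop from the paper), the knowledge-distillation repository linked above describes a Margin-MSE objective: the student's score margin between a relevant and a non-relevant passage is regressed onto the teacher's margin. The tensor names here are illustrative:

```python
import torch.nn.functional as F

def margin_mse(student_pos, student_neg, teacher_pos, teacher_neg):
    # Distill the teacher's score *margin* between the positive and the
    # negative passage into the dense student model (Margin-MSE)
    return F.mse_loss(student_pos - student_neg, teacher_pos - teacher_neg)
```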
MSMARCO-DEV (7K)
| Model | MRR@10 | NDCG@10 | Recall@1K |
|---|---|---|---|
| BM25 | .194 | .241 | .857 |
| TAS-B BERT_Dot (Retrieval) | .347 | .410 | .978 |
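For the vector-index use case mentioned above, a dense index over the pooled passage vectors might look like the following sketch. FAISS is our illustrative choice of index library, not something the model card prescribes, and `passage_vecs`/`query_vec` are assumed to come from the `encode()` sketch above:

```python
import faiss  # illustrative choice of vector-index library
import numpy as np

# Convert the torch tensors from encode() into float32 numpy matrices
passage_matrix = passage_vecs.numpy().astype(np.float32)
query_matrix = query_vec.numpy().astype(np.float32)

index = faiss.IndexFlatIP(passage_matrix.shape[1])  # inner product = BERT_Dot scoring
index.add(passage_matrix)                           # build the index once, offline
scores, ids = index.search(query_matrix, 10)        # top-10 passages per query
```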