
        deepset/gbert-base-germandpr-question_encoder


        Overview

        Language model: gbert-base-germandpr
        Language: German
        Training data: GermanDPR train set (~ 56MB)
        Eval data: GermanDPR test set (~ 6MB)
        Infrastructure: 4x V100 GPU
        Published: Apr 26th, 2021


        Details

        • We trained a dense passage retrieval model with two gbert-base models as encoders of questions and passages.
        • The dataset is GermanDPR, a new German-language dataset that we hand-annotated and published online.
        • It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.
          For each pair, there is one positive context and three hard negative contexts.
        • As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).
        • The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.
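The training data described above (one positive and three hard negative contexts per question) can be pictured as a single example in the standard DPR JSON layout. The field names follow the original DPR convention and the concrete question/passages here are made up for illustration, not taken from GermanDPR:

```python
# One GermanDPR-style training example: a question, its answer(s),
# one positive context, and three hard negative contexts.
# Field names follow the DPR JSON convention; contents are illustrative.
example = {
    "question": "Wie hoch ist die Zugspitze?",
    "answers": ["2962 m"],
    "positive_ctxs": [
        {"title": "Zugspitze", "text": "Die Zugspitze ist mit 2962 m der höchste Berg Deutschlands ..."}
    ],
    "hard_negative_ctxs": [
        {"title": "Watzmann", "text": "Der Watzmann ist ein Bergmassiv in den Berchtesgadener Alpen ..."},
        {"title": "Feldberg", "text": "Der Feldberg ist der höchste Berg des Schwarzwaldes ..."},
        {"title": "Brocken", "text": "Der Brocken ist der höchste Berg im Harz ..."},
    ],
}

# the ratio stated in the card: 1 positive, 3 hard negatives per pair
assert len(example["positive_ctxs"]) == 1
assert len(example["hard_negative_ctxs"]) == 3
```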

        See https://deepset.ai/germanquad for more details and dataset download.


        Hyperparameters

        batch_size = 40
        n_epochs = 20
        num_training_steps = 4640
        num_warmup_steps = 460
        max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder
        learning_rate = 1e-6
        lr_schedule = LinearWarmup
        embeds_dropout_prob = 0.1
        num_hard_negatives = 2
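The LinearWarmup schedule with the numbers above (460 warmup steps out of 4640 total, peak learning rate 1e-6) can be sketched as follows. This is a generic linear-warmup/linear-decay schedule, not necessarily deepset's exact implementation:

```python
def linear_warmup_lr(step, peak_lr=1e-6, warmup_steps=460, total_steps=4640):
    """Linear warmup from 0 to peak_lr over warmup_steps,
    then linear decay back to 0 at total_steps (a common
    'LinearWarmup' schedule; a sketch, not the exact FARM code)."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

# lr starts at 0, peaks at 1e-6 after 460 steps, and returns to 0 at step 4640
assert linear_warmup_lr(0) == 0.0
assert linear_warmup_lr(460) == 1e-6
assert linear_warmup_lr(4640) == 0.0
```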


        Performance

        During training, we monitored the in-batch average rank and the loss, and evaluated different batch sizes, numbers of epochs, and numbers of hard negatives on a dev set split from the train set.
        The dev split contained 1030 question/answer pairs.
        Even without thorough hyperparameter tuning, we observed stable learning: multiple restarts with different seeds produced very similar results.
        Note that the in-batch average rank depends on the batch size and the number of hard negatives; a smaller number of hard negatives makes the task easier.
        After fixing the hyperparameters, we trained the model on the full GermanDPR train set.
        We further evaluated the retrieval performance of the trained model on the full German Wikipedia, using the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.
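The recall@k metric used above counts a query as a hit if at least one relevant passage appears among the top-k retrieved passages. A minimal sketch of this computation (the toy passage IDs are made up, not from GermanDPR):

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of queries for which at least one relevant passage
    appears among the top-k retrieved results.

    retrieved: list of ranked passage-ID lists, one per query
    relevant:  list of sets of relevant passage IDs, one per query
    """
    hits = sum(
        1 for ret, rel in zip(retrieved, relevant)
        if any(doc in rel for doc in ret[:k])
    )
    return hits / len(retrieved)

# toy example: 2 of 3 queries have a relevant passage in the top 2
retrieved = [["p1", "p2", "p3"], ["p4", "p5", "p6"], ["p7", "p8", "p9"]]
relevant = [{"p2"}, {"p5"}, {"p9"}]
assert recall_at_k(retrieved, relevant, 2) == 2 / 3
assert recall_at_k(retrieved, relevant, 3) == 1.0
```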


        Usage


        In haystack

        You can load the model in haystack as a retriever for doing QA at scale:
        # import path for Haystack 0.x (the version current when this model was
        # published); in Haystack 1.x use: from haystack.nodes import DensePassageRetriever
        from haystack.retriever.dense import DensePassageRetriever

        retriever = DensePassageRetriever(
            document_store=document_store,
            query_embedding_model="deepset/gbert-base-germandpr-question_encoder",
            passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder",
        )


        Authors

        • Timo Möller: timo.moeller [at] deepset.ai
        • Julian Risch: julian.risch [at] deepset.ai
        • Malte Pietsch: malte.pietsch [at] deepset.ai


        About us

        We bring NLP to the industry via open source!
        Our focus: Industry specific language models & large scale QA systems.
        Some of our work:

        • German BERT (aka “bert-base-german-cased”)
        • GermanQuAD and GermanDPR datasets and models (aka “gelectra-base-germanquad”, “gbert-base-germandpr”)
        • FARM
        • Haystack

        Get in touch:
        Twitter | LinkedIn | Website
        By the way: we’re hiring!
