

        CANINE-s (CANINE pre-trained with subword loss)

        Pretrained CANINE model on 104 languages using a masked language modeling (MLM) objective. It was introduced in the paper CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation and first released in this repository.
        What’s special about CANINE is that it doesn’t require an explicit tokenizer (such as WordPiece or SentencePiece), unlike other models such as BERT and RoBERTa. Instead, it operates directly at the character level: each character is turned into its Unicode code point.
        This means that input processing is trivial and can typically be accomplished as:
        input_ids = [ord(char) for char in text]

        The ord() function is part of Python, and turns each character into its Unicode code point.
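        For illustration, here is that mapping round-tripped with Python’s built-in chr() (the example string is arbitrary); the CanineTokenizer used later additionally handles special tokens, padding and truncation:
        text = "hello"
        input_ids = [ord(char) for char in text]      # [104, 101, 108, 108, 111]
        decoded = "".join(chr(i) for i in input_ids)  # round-trips back to "hello"
        print(input_ids, decoded)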
        Disclaimer: The team releasing CANINE did not write a model card for this model, so this model card has been written by the Hugging Face team.


        Model description

        CANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

        • Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-s) is trained with a subword loss, meaning that the model needs to predict the identities of subword tokens, while taking characters as input. By reading characters yet predicting subword tokens, the hard token boundary constraint found in other models such as BERT is turned into a soft inductive bias in CANINE.
        • Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.

        This way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.
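        As a rough sketch of that feature-extraction use (the toy sentences, labels and scikit-learn classifier below are illustrative assumptions, not part of the original card), one can freeze CANINE and fit a standard classifier on its pooled outputs:
        import torch
        from sklearn.linear_model import LogisticRegression
        from transformers import CanineModel, CanineTokenizer

        tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
        model = CanineModel.from_pretrained("google/canine-s")
        model.eval()  # frozen encoder; we only extract features

        sentences = ["great movie", "terrible movie", "loved it", "hated it"]
        labels = [1, 0, 1, 0]  # placeholder binary labels

        with torch.no_grad():
            encoding = tokenizer(sentences, padding="longest", return_tensors="pt")
            features = model(**encoding).pooler_output  # (batch_size, hidden_size)

        classifier = LogisticRegression(max_iter=1000).fit(features.numpy(), labels)
        print(classifier.predict(features.numpy()))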


        Intended uses & limitations

        You can use the raw model for either masked language modeling or next sentence prediction, but it’s mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
        Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.
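        As an example of such fine-tuning, a minimal sequence-classification sketch might look as follows (the two-sentence batch and its labels are placeholders; a real run would add an optimizer and a training loop):
        import torch
        from transformers import CanineTokenizer, CanineForSequenceClassification

        tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
        model = CanineForSequenceClassification.from_pretrained("google/canine-s", num_labels=2)

        batch = tokenizer(
            ["I loved this film.", "I did not enjoy it."],
            padding="longest", truncation=True, return_tensors="pt",
        )
        labels = torch.tensor([1, 0])  # placeholder labels

        outputs = model(**batch, labels=labels)  # returns loss and logits
        outputs.loss.backward()                  # a real run would step an optimizer here
        print(outputs.logits.shape)              # (batch_size, num_labels)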


        How to use

        Here is how to use this model:
        from transformers import CanineTokenizer, CanineModel

        model = CanineModel.from_pretrained('google/canine-s')
        tokenizer = CanineTokenizer.from_pretrained('google/canine-s')

        inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
        encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")

        outputs = model(**encoding)  # forward pass
        pooled_output = outputs.pooler_output        # one vector per example
        sequence_output = outputs.last_hidden_state  # one hidden state per input character


        Training data

        The CANINE model was pretrained on the multilingual Wikipedia data of mBERT, which includes 104 languages.


        BibTeX entry and citation info

        @article{DBLP:journals/corr/abs-2103-06874,
          author        = {Jonathan H. Clark and
                           Dan Garrette and
                           Iulia Turc and
                           John Wieting},
          title         = {{CANINE:} Pre-training an Efficient Tokenization-Free Encoder for
                           Language Representation},
          journal       = {CoRR},
          volume        = {abs/2103.06874},
          year          = {2021},
          url           = {https://arxiv.org/abs/2103.06874},
          archivePrefix = {arXiv},
          eprint        = {2103.06874},
          timestamp     = {Tue, 16 Mar 2021 11:26:59 +0100},
          biburl        = {https://dblp.org/rec/journals/corr/abs-2103-06874.bib},
          bibsource     = {dblp computer science bibliography, https://dblp.org}
        }
