<span id="3dn8r"></span>
    1. <span id="3dn8r"><optgroup id="3dn8r"></optgroup></span><li id="3dn8r"><meter id="3dn8r"></meter></li>


        WavLM-Base

        Microsoft’s WavLM
        The base model, pretrained on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
        Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for a more detailed explanation of how to fine-tune the model.
        The model was pre-trained on 960h of Librispeech.
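        The following is a minimal sketch, not part of the original model card, of loading the checkpoint with the Hugging Face transformers library and extracting hidden states; the input file name is hypothetical, and the resampling step simply enforces the 16 kHz requirement mentioned above.

```python
import torch
import torchaudio
from transformers import AutoFeatureExtractor, WavLMModel

# Load the feature extractor and the pre-trained encoder.
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")
model = WavLMModel.from_pretrained("microsoft/wavlm-base")

# "speech.wav" is a hypothetical input file; resample to 16 kHz if needed.
waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = feature_extractor(
    waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt"
)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, frames, 768) for the base model
```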
        Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
        Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
        Abstract
        Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created unsupervisely and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.
        The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.


        Usage

        This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be used for inference. The model was pre-trained on English speech and should therefore be expected to perform well only on English. It has been shown to work well on the SUPERB benchmark.
        Note: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence of phonemes before fine-tuning.
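        As an illustration of the phoneme note above, label transcriptions could be converted with a grapheme-to-phoneme tool before building the tokenizer vocabulary; the phonemizer package used here is just one possible choice and is not prescribed by the model card.

```python
# Hedged illustration only: `phonemizer` (with an espeak backend) is one possible
# grapheme-to-phoneme tool, not something mandated by the model card.
from phonemizer import phonemize

transcription = "the quick brown fox"
phoneme_string = phonemize(transcription, language="en-us", backend="espeak", strip=True)
print(phoneme_string)  # a phoneme sequence to tokenize into CTC labels
```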


        Speech Recognition

        To fine-tune the model for speech recognition, see the official speech recognition example.
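        For orientation, a minimal sketch of the model setup that the linked example performs is given below; the processor directory is hypothetical and is assumed to contain a feature extractor plus a tokenizer built from the labeled training data.

```python
from transformers import Wav2Vec2Processor, WavLMForCTC

# "./wavlm-base-ft" is a hypothetical directory holding a feature extractor and
# a tokenizer created from the fine-tuning vocabulary (see the linked example).
processor = Wav2Vec2Processor.from_pretrained("./wavlm-base-ft")

# Attach a randomly initialized CTC head on top of the pre-trained encoder.
model = WavLMForCTC.from_pretrained(
    "microsoft/wavlm-base",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_encoder()  # the convolutional feature encoder is usually kept frozen
```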


        Speech Classification

        To fine-tune the model for speech classification, see the official audio classification example.
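        Again only as a sketch under stated assumptions, a classification head can be attached as follows; the number of labels is a hypothetical placeholder, and training itself is handled by the linked example.

```python
from transformers import AutoFeatureExtractor, WavLMForSequenceClassification

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")

# num_labels is a hypothetical placeholder for the target classes of your task.
model = WavLMForSequenceClassification.from_pretrained(
    "microsoft/wavlm-base",
    num_labels=8,
)
```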


        Speaker Verification

        TODO


        Speaker Diarization

        TODO


        Contribution

        The model was contributed by cywang and patrickvonplaten.


        License

        The official license can be found here.
