

        WavLM-Base-Plus

        Microsoft’s WavLM
        The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
        Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for a more detailed explanation of how to fine-tune the model.
        The model was pre-trained on:

        • 60,000 hours of Libri-Light
        • 10,000 hours of GigaSpeech
        • 24,000 hours of VoxPopuli

        Paper: WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
        Authors: Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
        Abstract
        Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As the speech signal contains multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where additional overlapped utterances are created in an unsupervised manner and incorporated during model training. Lastly, we scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.
        The original model can be found under https://github.com/microsoft/unilm/tree/master/wavlm.


        Usage

        This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be used for inference. Since the model was pre-trained on English speech, it should be expected to perform well only on English. The model has been shown to work well on the SUPERB benchmark.
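
        Even without fine-tuning, the pre-trained model can be used to extract frame-level speech representations. The sketch below is a minimal illustration using the transformers and torchaudio libraries; the audio path is a placeholder, and the resampling step enforces the 16kHz requirement noted above.

            import torch
            import torchaudio
            from transformers import AutoFeatureExtractor, WavLMModel

            # Load the pre-trained checkpoint; no tokenizer is involved at this stage.
            feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
            model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")

            # "speech.wav" is a placeholder path; resample to the 16kHz the model expects.
            waveform, sample_rate = torchaudio.load("speech.wav")
            if sample_rate != 16000:
                waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)

            inputs = feature_extractor(waveform.squeeze(0).numpy(), sampling_rate=16000, return_tensors="pt")
            with torch.no_grad():
                outputs = model(**inputs)

            # One hidden vector per ~20 ms frame; hidden size is 768 for the base-plus model.
            print(outputs.last_hidden_state.shape)  # (batch, frames, 768)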
        Note: The model was pre-trained on phonemes rather than characters. This means that the input text should be converted to a sequence of phonemes before fine-tuning, as sketched below.
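
        A minimal sketch of such a conversion, assuming the third-party phonemizer package with an espeak backend installed (neither of which is prescribed by this model card):

            from phonemizer import phonemize

            # Convert a transcript to a phoneme string before building the tokenizer
            # vocabulary for fine-tuning; the backend choice here is an assumption.
            text = "the quick brown fox"
            phonemes = phonemize(text, language="en-us", backend="espeak", strip=True)
            print(phonemes)  # IPA-style phoneme sequence produced by espeak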


        Speech Recognition

        To fine-tune the model for speech recognition, see the official speech recognition example.
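
        For orientation, inference with a checkpoint produced by such a fine-tuning run could look like the sketch below. WavLMForCTC is the transformers class for CTC-based recognition; the checkpoint name is a placeholder, since the base model ships without a tokenizer or CTC head.

            import numpy as np
            import torch
            from transformers import AutoProcessor, WavLMForCTC

            # Placeholder name for a checkpoint produced by the fine-tuning example.
            checkpoint = "your-username/wavlm-base-plus-ft-en"
            processor = AutoProcessor.from_pretrained(checkpoint)
            model = WavLMForCTC.from_pretrained(checkpoint)

            # Dummy one-second input at 16kHz; replace with real audio (see the Usage sketch).
            waveform = np.zeros(16000, dtype=np.float32)
            inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
            with torch.no_grad():
                logits = model(**inputs).logits

            # Greedy CTC decoding: most likely token per frame, then collapse repeats/blanks.
            predicted_ids = torch.argmax(logits, dim=-1)
            print(processor.batch_decode(predicted_ids))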


        Speech Classification

        To fine-tune the model for speech classification, see the official audio classification example.
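
        A minimal setup sketch for that example, using the transformers classification head on top of the pre-trained encoder; the label count is a made-up value for illustration.

            from transformers import WavLMForSequenceClassification

            # num_labels depends on the downstream dataset (e.g. number of keyword
            # classes); the head is randomly initialized and must be trained.
            model = WavLMForSequenceClassification.from_pretrained(
                "microsoft/wavlm-base-plus",
                num_labels=12,  # hypothetical label count
            )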


        Speaker Verification

        TODO


        Speaker Diarization

        TODO


        Contribution

        The model was contributed by cywang and patrickvonplaten.


        License

        The official license can be found here.
