

        Model card for CLAP: Contrastive Language-Audio Pretraining

        laion/clap-htsat-unfused


        Table of Contents

        1. TL;DR
        2. Usage
        3. Uses
        4. Citation


        TL;DR

        The abstract of the paper states:

        Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models’ results in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.


        Usage

        You can use this model for zero-shot audio classification or for extracting audio and/or textual features.


        Uses


        Perform zero-shot audio classification


        Using pipeline

        from datasets import load_dataset
        from transformers import pipeline

        # Load an example clip from the ESC-50 environmental sound dataset
        dataset = load_dataset("ashraq/esc50")
        audio = dataset["train"]["audio"][-1]["array"]

        # Build a zero-shot audio classification pipeline around the CLAP checkpoint
        audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
        output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
        print(output)
        >>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
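
        The candidate labels are wrapped in a text prompt before being scored against the audio. A minimal sketch of customizing that prompt, assuming the pipeline's hypothesis_template argument (its default in Transformers is "This is a sound of {}."); the label set here is illustrative:

        # Each candidate label is substituted into the template before scoring
        output = audio_classifier(
            audio,
            candidate_labels=["dog", "vacuum cleaner"],
            hypothesis_template="This is a sound of {}.",
        )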


        Run the model

        You can also get the audio and text embeddings using ClapModel.


        Run the model on CPU:

        from datasets import load_dataset
        from transformers import ClapModel, ClapProcessor

        # Load a single example from a small LibriSpeech dummy split
        librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
        audio_sample = librispeech_dummy[0]

        # Load the model and its processor from the Hub
        model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
        processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

        # Preprocess the raw waveform and extract the audio embedding
        inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
        audio_embed = model.get_audio_features(**inputs)
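
        Text embeddings come from the same checkpoint. A minimal sketch reusing the model and processor loaded above; the candidate captions are illustrative:

        # Tokenize a few candidate captions and extract their text embeddings
        text_inputs = processor(text=["Sound of a dog", "Sound of an engine"], return_tensors="pt", padding=True)
        text_embed = model.get_text_features(**text_inputs)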


        Run the model on GPU:

        from datasets import load_dataset
        from transformers import ClapModel, ClapProcessor

        # Load a single example from a small LibriSpeech dummy split
        librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
        audio_sample = librispeech_dummy[0]

        # Move the model to the first CUDA device
        model = ClapModel.from_pretrained("laion/clap-htsat-unfused").to(0)
        processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

        # Preprocess on CPU, then move the tensors to the same device as the model
        inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
        audio_embed = model.get_audio_features(**inputs)
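
        You can also compare audio and text in a single forward pass instead of matching embeddings yourself. A minimal sketch, assuming the GPU-loaded model and processor above; in Transformers, ClapModel's forward output exposes the pairwise similarity scores as logits_per_audio, and the captions here are illustrative:

        # Jointly process candidate captions and the waveform, then score them
        inputs = processor(
            text=["Sound of a dog", "Sound of an engine"],
            audios=audio_sample["audio"]["array"],
            return_tensors="pt",
            padding=True,
        ).to(0)  # keep tensors on the same device as the model
        outputs = model(**inputs)
        probs = outputs.logits_per_audio.softmax(dim=-1)  # audio-to-text similarities as probabilities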


        Citation

        If you use this model in your work, please consider citing the original paper:
        @misc{https://doi.org/10.48550/arxiv.2211.06687,
          doi       = {10.48550/ARXIV.2211.06687},
          url       = {https://arxiv.org/abs/2211.06687},
          author    = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
          keywords  = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
          title     = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
          publisher = {arXiv},
          year      = {2022},
          copyright = {Creative Commons Attribution 4.0 International}
        }
