

        Model card for CLAP: Contrastive Language-Audio Pretraining

        laion/clap-htsat-fused


        Table of Contents

        1. TL;DR
        2. Usage
        3. Citation


        TL;DR

        The abstract of the paper states:

        Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models’ results in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.
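
        The card does not spell out the training objective, but for intuition, here is a minimal PyTorch sketch of the symmetric contrastive (CLIP-style) loss such audio-text models typically train with; details like temperature handling are assumptions and the paper's exact formulation may differ:

        import torch
        import torch.nn.functional as F

        def contrastive_loss(audio_embeds, text_embeds, logit_scale):
            # Normalize so dot products are cosine similarities.
            audio_embeds = F.normalize(audio_embeds, dim=-1)
            text_embeds = F.normalize(text_embeds, dim=-1)
            # Batch similarity matrix, scaled by a learned temperature.
            logits = logit_scale * audio_embeds @ text_embeds.t()
            # Matching audio-text pairs sit on the diagonal.
            labels = torch.arange(logits.size(0), device=logits.device)
            # Symmetric cross-entropy over both retrieval directions.
            return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2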


        Usage

        You can use this model for zero-shot audio classification or for extracting audio and/or text features.


        Perform zero-shot audio classification


        Using pipeline

        from datasets import load_dataset
        from transformers import pipeline

        # Load an example clip from the ESC-50 environmental sound dataset.
        dataset = load_dataset("ashraq/esc50")
        audio = dataset["train"]["audio"][-1]["array"]

        # Score the clip against free-form candidate labels.
        audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-fused")
        output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
        print(output)
        >>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]


        Run the model:

        You can also get the audio and text embeddings using ClapModel.


        Run the model on CPU:

        from datasets import load_dataset
        from transformers import ClapModel, ClapProcessor

        # Load a short speech sample to embed.
        librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
        audio_sample = librispeech_dummy[0]

        model = ClapModel.from_pretrained("laion/clap-htsat-fused")
        processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

        # Preprocess the raw waveform and compute the audio embedding.
        inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
        audio_embed = model.get_audio_features(**inputs)


        Run the model on GPU:

        from datasets import load_dataset
        from transformers import ClapModel, ClapProcessor

        librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
        audio_sample = librispeech_dummy[0]

        # Move the model and the preprocessed inputs to GPU 0.
        model = ClapModel.from_pretrained("laion/clap-htsat-fused").to(0)
        processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")
        inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
        audio_embed = model.get_audio_features(**inputs)
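

        Get text embeddings:

        The text branch works the same way via get_text_features; the example captions below are illustrative:

        from transformers import ClapModel, ClapProcessor

        model = ClapModel.from_pretrained("laion/clap-htsat-fused")
        processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")

        # Tokenize the captions and compute text embeddings.
        texts = ["Sound of a dog", "Sound of vacuum cleaner"]
        inputs = processor(text=texts, return_tensors="pt", padding=True)
        text_embed = model.get_text_features(**inputs)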


        Citation

        If you are using this model for your work, please consider citing the original paper:
        @misc{https://doi.org/10.48550/arxiv.2211.06687,
          doi       = {10.48550/ARXIV.2211.06687},
          url       = {https://arxiv.org/abs/2211.06687},
          author    = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
          keywords  = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
          title     = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
          publisher = {arXiv},
          year      = {2022},
          copyright = {Creative Commons Attribution 4.0 International}
        }
