

        SiEBERT – English-Language Sentiment Classification


        Overview

        This model (“SiEBERT”, short for “Sentiment in English”) is a fine-tuned checkpoint of RoBERTa-large (Liu et al. 2019). It enables reliable binary sentiment analysis for various types of English-language text. For each instance, it predicts either positive (1) or negative (0) sentiment. The model was fine-tuned and evaluated on 15 data sets from diverse text sources to enhance generalization across different types of texts (reviews, tweets, etc.). Consequently, when applied to new data, it outperforms models trained on only one type of text (e.g., movie reviews from the popular SST-2 benchmark), as the performance comparison below shows.
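        Note that the Hugging Face pipeline returns label strings rather than the 0/1 encoding described above. A minimal conversion helper might look like the following sketch; the exact label names (assumed here to be "POSITIVE"/"NEGATIVE") depend on the model's configuration, so check the model's id2label mapping before relying on them:

```python
# Map the pipeline's label strings to the 0/1 encoding used in this card.
# Assumption: the model emits "POSITIVE"/"NEGATIVE" labels; verify against
# the id2label mapping in the model's config.json.
def label_to_int(label: str) -> int:
    mapping = {"NEGATIVE": 0, "POSITIVE": 1}
    return mapping[label.upper()]

print(label_to_int("POSITIVE"))  # 1
print(label_to_int("negative"))  # 0
```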


        Predictions on a data set

        If you want to predict sentiment for your own data, we provide an example script via Google Colab. You can upload your data to Google Drive and run the script for free on a Colab GPU. Setup takes only a few minutes. We suggest that you manually label a subset of your data to evaluate performance for your use case. For performance benchmark values across various sentiment analysis contexts, please refer to our paper (Hartmann et al. 2022).
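        The manual-evaluation step suggested above amounts to comparing predicted labels against your hand labels on the held-out subset. A minimal accuracy computation could look like this sketch; the label lists are hypothetical placeholders, and in practice the predictions would come from the model and the gold labels from your annotators, both in the 0/1 encoding:

```python
# Compare model predictions against a manually labeled subset.
# The two lists below are hypothetical placeholders for illustration.
def accuracy(predictions, gold):
    assert len(predictions) == len(gold), "label lists must align"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

preds = [1, 0, 1, 1, 0, 1, 0, 0]  # model outputs (0/1 encoding)
gold  = [1, 0, 1, 0, 0, 1, 1, 0]  # manual labels
print(f"accuracy: {accuracy(preds, gold):.2f}")  # accuracy: 0.75
```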



        Use in a Hugging Face pipeline

        The easiest way to use the model for single predictions is Hugging Face’s sentiment-analysis pipeline, which needs only a couple of lines of code, as shown in the following example:

        from transformers import pipeline

        sentiment_analysis = pipeline("sentiment-analysis", model="siebert/sentiment-roberta-large-english")
        print(sentiment_analysis("I love this!"))



        Use for further fine-tuning

        The model can also be used as a starting point for further fine-tuning of RoBERTa on your specific data. Please refer to Hugging Face’s documentation for further details and example code.


        Performance

        To evaluate the performance of our general-purpose sentiment analysis model, we set aside an evaluation set from each data set, which was not used for training. On average, our model outperforms a DistilBERT-based model (which is solely fine-tuned on the popular SST-2 data set) by more than 15 percentage points (78.1 vs. 93.2 percent, see table below). As a robustness check, we evaluate the model in a leave-one-out manner (training on 14 data sets, evaluating on the one left out), which decreases model performance by only about 3 percentage points on average and underscores its generalizability. Model performance is given as evaluation set accuracy in percent.

        Dataset                                          DistilBERT SST-2   This model
        McAuley and Leskovec (2013) (Reviews)            84.7               98.0
        McAuley and Leskovec (2013) (Review Titles)      65.5               87.0
        Yelp Academic Dataset                            84.8               96.5
        Maas et al. (2011)                               80.6               96.0
        Kaggle                                           87.2               96.0
        Pang and Lee (2005)                              89.7               91.0
        Nakov et al. (2013)                              70.1               88.5
        Shamma (2009)                                    76.0               87.0
        Blitzer et al. (2007) (Books)                    83.0               92.5
        Blitzer et al. (2007) (DVDs)                     84.5               92.5
        Blitzer et al. (2007) (Electronics)              74.5               95.0
        Blitzer et al. (2007) (Kitchen devices)          80.0               98.5
        Pang et al. (2002)                               73.5               95.5
        Speriosu et al. (2011)                           71.5               85.5
        Hartmann et al. (2019)                           65.5               98.0
        Average                                          78.1               93.2
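        The reported averages follow directly from the per-dataset columns; a quick arithmetic check that recomputes them from the table values:

```python
# Recompute the column averages reported in the performance table.
distilbert = [84.7, 65.5, 84.8, 80.6, 87.2, 89.7, 70.1,
              76.0, 83.0, 84.5, 74.5, 80.0, 73.5, 71.5, 65.5]
siebert = [98.0, 87.0, 96.5, 96.0, 96.0, 91.0, 88.5,
           87.0, 92.5, 92.5, 95.0, 98.5, 95.5, 85.5, 98.0]

avg_distilbert = sum(distilbert) / len(distilbert)
avg_siebert = sum(siebert) / len(siebert)
print(f"DistilBERT SST-2 average: {avg_distilbert:.1f}")  # 78.1
print(f"This model's average:     {avg_siebert:.1f}")     # 93.2
```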
