

        Vision Transformer (base-sized model)

        Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224×224. It was introduced in the paper An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al. and first released in this repository. The weights were converted from the timm repository by Ross Wightman, who had already converted them from JAX to PyTorch. Credits go to him.
        Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team.


        Model description

        The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224×224 pixels.
        Images are presented to the model as a sequence of fixed-size patches (resolution 16×16), which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before feeding the sequence to the layers of the Transformer encoder.
        Note that this model does not provide any fine-tuned heads, as these were zeroed by the Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
        Through pre-training, the model learns an inner representation of images that can then be used to extract features for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image. A sketch of this setup follows.
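        Here is a minimal sketch of that setup in PyTorch (an illustration, not the authors' training code; ViTLinearClassifier and num_labels are hypothetical names introduced here):

        import torch
        import torch.nn as nn
        from transformers import ViTModel

        class ViTLinearClassifier(nn.Module):
            def __init__(self, num_labels):
                super().__init__()
                # pre-trained encoder; only the linear head below is new
                self.vit = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
                self.classifier = nn.Linear(self.vit.config.hidden_size, num_labels)

            def forward(self, pixel_values):
                outputs = self.vit(pixel_values=pixel_values)
                cls_state = outputs.last_hidden_state[:, 0]  # final hidden state of the [CLS] token
                return self.classifier(cls_state)

        model = ViTLinearClassifier(num_labels=10)    # hypothetical 10-class task
        logits = model(torch.randn(1, 3, 224, 224))   # dummy 224×224 RGB image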


        Intended uses & limitations

        You can use the raw model to extract image features and build classifiers on top of them. See the model hub to look for fine-tuned versions on a task that interests you.
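        For direct classification, a checkpoint fine-tuned on ImageNet-1k can be used instead of the raw encoder. As an illustration (google/vit-base-patch16-224 is one such checkpoint):

        from transformers import ViTImageProcessor, ViTForImageClassification
        from PIL import Image
        import requests

        url = 'https://res.www.futurefh.com/2023/05/20230526095402-647081ba87f76.jpg'
        image = Image.open(requests.get(url, stream=True).raw)

        processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224')
        model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')

        inputs = processor(images=image, return_tensors="pt")
        logits = model(**inputs).logits
        print(model.config.id2label[logits.argmax(-1).item()])  # predicted ImageNet-1k class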


        How to use

        Here is how to use this model in PyTorch:
        from transformers import ViTImageProcessor, ViTModel
        from PIL import Image
        import requests

        # download an example image
        url = 'https://res.www.futurefh.com/2023/05/20230526095402-647081ba87f76.jpg'
        image = Image.open(requests.get(url, stream=True).raw)

        # load the preprocessor and the pre-trained encoder
        processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224-in21k')
        model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')

        # preprocess the image and run a forward pass
        inputs = processor(images=image, return_tensors="pt")
        outputs = model(**inputs)
        last_hidden_states = outputs.last_hidden_state
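        For a 224×224 input, last_hidden_states has shape (1, 197, 768): one [CLS] token plus 14 × 14 = 196 patch tokens, each represented by a 768-dimensional vector in this base-sized model.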

        Here is how to use this model in JAX/Flax:
        from transformers import ViTImageProcessor, FlaxViTModel
        from PIL import Image
        import requests

        # download an example image
        url = 'https://res.www.futurefh.com/2023/05/20230526095402-647081ba87f76.jpg'
        image = Image.open(requests.get(url, stream=True).raw)

        # load the preprocessor and the pre-trained encoder
        processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224-in21k')
        model = FlaxViTModel.from_pretrained('google/vit-base-patch16-224-in21k')

        # preprocess the image (NumPy tensors for Flax) and run a forward pass
        inputs = processor(images=image, return_tensors="np")
        outputs = model(**inputs)
        last_hidden_states = outputs.last_hidden_state
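        The Flax version returns jax.numpy arrays rather than PyTorch tensors, but the hidden-state shape is the same (1, 197, 768).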


        Training data

        The ViT model was pretrained on ImageNet-21k, a dataset consisting of 14 million images and 21,843 classes.


        Training procedure


        Preprocessing

        The exact details of preprocessing of images during training/validation can be found in the original ViT repository.
        Images are resized/rescaled to the same resolution (224×224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
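        Assuming those settings, the equivalent preprocessing can be written with torchvision (an illustrative sketch; the ViTImageProcessor shown above applies the same steps for you):

        from torchvision import transforms

        # resize to 224×224, scale pixel values to [0, 1], then normalize each
        # RGB channel with mean 0.5 and std 0.5, mapping values into [-1, 1]
        preprocess = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
        ])
        pixel_values = preprocess(image).unsqueeze(0)  # add a batch dimension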


        Pretraining

        The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224.
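        In PyTorch terms, the warmup and gradient clipping might look like the sketch below (the authors trained in JAX; the optimizer choice and peak learning rate here are assumptions, and model and loader are placeholders):

        import torch

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)  # assumed optimizer and LR
        warmup_steps = 10_000
        scheduler = torch.optim.lr_scheduler.LambdaLR(
            optimizer, lambda step: min(1.0, (step + 1) / warmup_steps)
        )

        for batch in loader:  # placeholder loader yielding batches of 4096 images
            loss = model(**batch).loss
            loss.backward()
            # gradient clipping at global norm 1, as described above
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()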


        Evaluation results

        For evaluation results on several image classification benchmarks, we refer to Tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384×384), and that increasing the model size results in better performance.


        BibTeX entry and citation info

        @misc{dosovitskiy2020image,
          title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
          author={Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby},
          year={2020},
          eprint={2010.11929},
          archivePrefix={arXiv},
          primaryClass={cs.CV}
        }

        @inproceedings{deng2009imagenet,
          title={ImageNet: A large-scale hierarchical image database},
          author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
          booktitle={2009 IEEE Conference on Computer Vision and Pattern Recognition},
          pages={248--255},
          year={2009},
          organization={IEEE}
        }
