This is a copy of the original BLOOM weights that is more efficient to use with DeepSpeed-MII and DeepSpeed-Inference. In this repo the original tensors are split into 8 shards to target 8 GPUs, which allows the user to run the model with DeepSpeed-Inference tensor parallelism.
For details about the BLOOM model itself, please see the original BLOOM model card.
For examples of using this repo, please see the following (a minimal usage sketch follows the list):
- https://github.com/huggingface/transformers-bloom-inference
- https://github.com/microsoft/DeepSpeed-MII
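
Below is a minimal sketch of loading this checkpoint with DeepSpeed-Inference tensor parallelism, assuming the script is launched with the `deepspeed` launcher (e.g. `deepspeed --num_gpus 8 run.py`) and that a checkpoint index JSON describing the shard files has been prepared; the file name `checkpoints.json` here is an assumption, and the linked repos above show the complete, supported workflow.

```python
# Minimal sketch: BLOOM fp16 inference with DeepSpeed tensor parallelism.
# Assumptions: torch, deepspeed, and transformers are installed; one process
# per GPU is started by the deepspeed launcher (WORLD_SIZE is set by it);
# "checkpoints.json" is a user-prepared index of the pre-sharded weights.
import os

import torch
import deepspeed
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/bloom-deepspeed-inference-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build the model skeleton on the meta device so no full-size weights are
# allocated per rank; DeepSpeed-Inference loads each rank's shard instead.
config = AutoConfig.from_pretrained(model_name)
with deepspeed.OnDevice(dtype=torch.float16, device="meta"):
    model = AutoModelForCausalLM.from_config(config)

# Initialize tensor-parallel inference across the 8 GPUs matching the 8 shards.
model = deepspeed.init_inference(
    model,
    mp_size=int(os.getenv("WORLD_SIZE", "8")),
    dtype=torch.float16,
    replace_with_kernel_inject=True,
    checkpoint="checkpoints.json",  # assumed name of the shard index file
)
model = model.module  # unwrap the underlying module for generation

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(
    torch.cuda.current_device()
)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```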