---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- stableLM
- sharded
- 8-bit
- quantized
inference: false
---

# stablelm-base-alpha-7b-sharded-8bit

This is a sharded checkpoint (with ~4 GB shards) of the `stabilityai/stablelm-base-alpha-7b` model **in `8bit` precision**, quantized with `bitsandbytes`. Refer to the [original model](https://huggingface.co/stabilityai/stablelm-base-alpha-7b) for all details about the model itself. For more info on loading 8bit models, refer to the [example repo](https://huggingface.co/ybelkada/bloom-1b7-8bit) and/or the `4.28.0` [release info](https://github.com/huggingface/transformers/releases/tag/v4.28.0).

- the total model size is only ~7 GB (assuming a size reduction similar to that of the dolly-v2-12b model)
- this enables low-RAM loading, e.g. on Colab :)

## Basic Usage

Install/upgrade `transformers`, `accelerate`, and `bitsandbytes`. For this to work **you must have** `transformers>=4.28.0` and `bitsandbytes>0.37.2`.

```bash
pip install -U -q transformers bitsandbytes accelerate
```

Load the model. Since it is serialized in 8bit, you don't need to do anything special:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/stablelm-base-alpha-7b-sharded-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# the checkpoint is already quantized, so no load_in_8bit flag is needed
model = AutoModelForCausalLM.from_pretrained(model_name)
```

You can also use this model as a drop-in replacement in the notebook for the standard sharded models:

Open In Colab
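Once loaded, generation works like any other causal LM in `transformers`. Below is a minimal sketch to sanity-check the setup; the prompt and sampling parameters (`max_new_tokens`, `temperature`) are illustrative choices, not recommendations from the original model card:

```python
# minimal generation sketch; prompt and sampling settings are illustrative
inputs = tokenizer("The Swiss Alps are", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```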
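If you want to verify the ~7 GB figure above on your own machine, `transformers` models expose a `get_memory_footprint()` method that reports the in-memory size of the loaded weights:

```python
# report the in-memory size of the quantized model (bytes -> GB)
print(f"model footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```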