Comments (3)
I've seen this commit and tried to do something like this:
import mii

client = mii.serve(
    "mistralai/Mistral-7B-v0.1",
    deployment_name="mistral-deployment",
    enable_restful_api=True,
    quantization_mode="wf6af16",
    restful_api_port=28080,
)
But it doesn't work for me:
File "/root/miniconda3/envs/ds/lib/python3.11/site-packages/deepspeed/inference/v2/engine_factory.py", line 129, in build_hf_engine
return InferenceEngineV2(policy, engine_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/ds/lib/python3.11/site-packages/deepspeed/inference/v2/engine_v2.py", line 83, in __init__ self._model = self._policy.build_model(self._config, self._base_mp_group)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/ds/lib/python3.11/site-packages/deepspeed/inference/v2/model_implementations/inference_policy_base.py", line 156, in build_model
self.model = self.instantiate_model(engine_config, mp_group)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/ds/lib/python3.11/site-packages/deepspeed/inference/v2/model_implementations/mistral/policy.py", line 17, in instantiate_model
return MistralInferenceModel(config=self._model_config, engine_config=engine_config, base_mp_group=mp_group)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/ds/lib/python3.11/site-packages/deepspeed/inference/v2/model_implementations/inference_transformer_base.py", line 216, in __init__
self.make_qkv_layer()
File "/root/miniconda3/envs/ds/lib/python3.11/site-packages/deepspeed/inference/v2/model_implementations/inference_transformer_base.py", line 302, in make_qkv_layer
self.qkv = heuristics.instantiate_linear(linear_config, self._engine_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/ds/lib/python3.11/site-packages/deepspeed/inference/v2/modules/heuristics.py", line 98, in instantiate_linear
return DSLinearRegistry.instantiate_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/ds/lib/python3.11/site-packages/deepspeed/inference/v2/modules/module_registry.py", line 37, in instantiate_config
raise ValueError(f"Config {config_bundle.config} is not supported by {target_implementation}")
ValueError: Config max_tokens=768 in_channels=4096 out_channels=6144 activation=6 input_dtype=torch.bfloat16 output_dtype=torch.bfloat16 is not supported by <class 'deepspeed.inference.v2.modules.implementations.linear.quantized_linear.QuantizedWf6Af16Linear'>
Could you please help me with this problem?
from deepspeed-mii.
This is because the FP6 kernel can only take FP16 input, not BF16. You can work around it by converting the model to FP16 first with the following code:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "mistralai/Mixtral-8x7B-v0.1"
save_path = f"fp16/{model_id}"

# Load the original checkpoint and cast the weights to FP16.
model = AutoModelForCausalLM.from_pretrained(model_id)
model = model.to(torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Save the FP16 copy so MII can load it from disk.
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
Then pass the saved FP16 model path to MII.
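For example, a minimal sketch that reuses the serve arguments from the snippet above; the local path comes from save_path, and the deployment name and port are just placeholders:

import mii

# Point MII at the locally saved FP16 checkpoint instead of the Hub id.
client = mii.serve(
    "fp16/mistralai/Mixtral-8x7B-v0.1",
    deployment_name="mixtral-fp6-deployment",
    enable_restful_api=True,
    quantization_mode="wf6af16",
    restful_api_port=28080,
)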
Note that FP6 is only supported for dense layers, not the sparse MoE layers. Since FP6 only applies to the QKVO projections, not the MLP part:
- You won't be able to fit the model on one GPU.
- There won't be much acceleration compared to the original Mixtral.
We are working on a Sparse MoE GeMM for FP6, but this will take some time.
from deepspeed-mii.
Closing for now, happy to reopen if needed :)
from deepspeed-mii.
Related Issues (20)
- Only running one replica even though setting many replicas
- [Problem]errno: 98 - Address already in use
- Performance with vllm
- error when using Qwen1.5-32B
- ValueError: Unsupported model type phi3
- BUG in run_batch_processing
- Cannot run Yi-34B-Chat => ValueError: Unsupported q_ratio: 7 HOT 2
- [REQUEST] Mixtral-8x22B support
- [REQUEST] LLAMA-3 support
- Does deepspeed-mii support prefix_allowed_tokens_fn?
- Can DeepSpeed-MII load quantized int4 or int8 models?
- Tf32 support
- How can I use the same prompt to produce the same text output as vllm
- Support LLava next stronger
- support Qwen
- support Qwen1.5
- support stream
- [BUG] MII Backend Hangs After 9999 Exceptions in `MIIAsyncPipeline.put_request` HOT 1
- few questions regarding the implementation of streaming and batching
- Configure server log level