Comments (3)
Using the legacy method I still get the OOM error, with mii_configs = {"tensor_parallel": 1, "dtype": "fp16"}.
It seems the HF model loaded successfully, but DeepSpeed hit OOM during "replace_transformer_layer":
Traceback (most recent call last):
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/mii/legacy/launch/multi_gpu_server.py", line 97, in <module>
    main()
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/mii/legacy/launch/multi_gpu_server.py", line 89, in main
    inference_pipeline = load_models(args.model_config)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/mii/legacy/models/load_models.py", line 72, in load_models
    engine = deepspeed.init_inference(getattr(inference_pipeline,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/__init__.py", line 336, in init_inference
    engine = InferenceEngine(model, config=ds_inference_config)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/inference/engine.py", line 158, in __init__
    self._apply_injection_policy(config)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/inference/engine.py", line 418, in _apply_injection_policy
    replace_transformer_layer(client_module, self.module, checkpoint, config, self.config)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 354, in replace_transformer_layer
    replaced_module = replace_module(model=model,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 603, in replace_module
    replaced_module, _ = _replace_module(model, policy, state_dict=sd)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 663, in _replace_module
    _, layer_id = _replace_module(child,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 663, in _replace_module
    _, layer_id = _replace_module(child,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 639, in _replace_module
    replaced_module = policies[child.__class__][0](child,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 310, in replace_fn
    new_module = replace_with_policy(child,
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/replace_module.py", line 250, in replace_with_policy
    _container.transpose()
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/containers/features/meta_tensor.py", line 48, in transpose
    super().transpose()
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/containers/base.py", line 286, in transpose
    self.transpose_mlp()
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/containers/base.py", line 295, in transpose_mlp
    self._h4h_w = self.transpose_impl(self.h4h_w.data)
  File "/home/rdwang/anaconda3/envs/deepspeed_mii_py3.10/lib/python3.10/site-packages/deepspeed/module_inject/containers/base.py", line 300, in transpose_impl
    data.reshape(-1).copy_(data.transpose(-1, -2).contiguous().reshape(-1))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 172.00 MiB. GPU 0 has a total capacity of 23.69 GiB of which 116.62 MiB is free. Process 2657081 has 2.49 GiB memory in use. Including non-PyTorch memory, this process has 21.08 GiB memory in use. Of the allocated memory 20.56 GiB is allocated by PyTorch, and 231.30 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
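The error message itself suggests one mitigation: capping allocator block splitting via max_split_size_mb to reduce fragmentation. A minimal sketch (the 128 MiB value is an arbitrary starting point, not a value taken from this thread) is to set PYTORCH_CUDA_ALLOC_CONF before the first CUDA allocation:

```python
import os

# The CUDA caching allocator reads this variable on first use, so it must be
# set before any CUDA tensor is allocated (safest: before importing torch).
# 128 is an illustrative value; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Equivalently, export the variable in the shell before launching the MII server process.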
from deepspeed-mii.
Hi @wangrendong-yition, your 24GB of memory should be plenty to run the Llama-7B model. Could you share the GPU type, the deepspeed/deepspeed-mii versions, and the script you are running? This will help me debug the OOM error you are seeing.
Thanks!
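One generic way to gather the package versions the maintainer asks for (a standard-library helper, not part of MII) is importlib.metadata:

```python
from importlib.metadata import version, PackageNotFoundError

def pkg_version(name: str):
    """Return the installed version string of a distribution, or None if absent."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

# Print whatever is installed; missing packages show as None.
for name in ("deepspeed", "deepspeed-mii", "torch"):
    print(name, pkg_version(name))
```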
I don't know what happened before, but today I gave it another try:

import mii
pipe = mii.pipeline("/data/Llama-2-7b-hf")
response = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=512)
print(response)

and it works fine now.
The legacy method still hits the OOM error on an RTX 3090 with the following code:
import mii
mii_configs = {"tensor_parallel": 1, "dtype": "fp16"}
mii.deploy(task="text-generation",
           model="NousResearch/Llama-2-7b-hf",
           model_path="/data/deepspeed_mii_models",
           deployment_name="llama2_deployment",
           mii_config=mii_configs)
But I think this OOM doesn't matter now. Anyway, this issue can be closed. Thanks!
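For context on why a 24 GB card is tight here, some back-of-the-envelope arithmetic (the 6.74e9 parameter count for Llama-2-7B is the commonly cited figure, approximate) shows the fp16 weights alone occupy roughly 12-13 GiB, so any transient full-size copy made while transposing weights in replace_transformer_layer adds meaningfully on top:

```python
# Rough fp16 memory estimate for Llama-2-7B (illustrative arithmetic,
# not an exact account of DeepSpeed's allocations).
params = 6.74e9          # approximate parameter count of Llama-2-7B
bytes_per_param = 2      # fp16
weights_gib = params * bytes_per_param / 2**30
print(f"weights alone: {weights_gib:.1f} GiB")
# A transpose that materializes a contiguous copy of a layer's MLP weight
# briefly needs that weight twice, on top of activations and CUDA overhead.
```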