
Comments (15)

aarnphm commented on May 14, 2024

see #87 for fixes


aarnphm commented on May 14, 2024

Will release a patch soon


skywolf123 commented on May 14, 2024

Updated to v0.1.19, but I get the same error:

openllm.exceptions.OpenLLMException: Model type <class 'transformers_modules.chatglm-6b-int4.configuration_chatglm.ChatGLMConfig'> is not supported yet.


aarnphm commented on May 14, 2024

Hey there, we discussed more extensive custom-path support internally and want to share the decision:
With a custom model path, when you do openllm start opt --model-id /path/to/custom-path, OpenLLM will first copy the model into the local BentoML store and serve it from there. This decouples a lot of the loading logic within OpenLLM between custom paths and pretrained models.
Under the hood, openllm start does two things when it detects a custom path:

  • openllm import opt /path/to/custom-path -> this adds the custom path to the BentoML store
  • openllm start runs the server
    (Note that this is already the case with pretrained models)

openllm build will behave the same.

To ensure hermeticity, openllm import accepts an optional --model-version flag so that we don't copy the same path multiple times. If it is not passed, we generate the name from the path (its base name) and the version from a hash of the path's last modified time; see the sketch below.
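For illustration, a minimal Python sketch of how such a default tag could be derived. The helper name and the exact hashing scheme are assumptions for illustration, not OpenLLM's actual implementation:

import hashlib
import os

def default_model_tag(model_path: str) -> str:
    # Assumed scheme: tag name from the path's base name (lowercased,
    # since BentoML tag names are lowercase), version from a hash of
    # the path's last-modified time.
    name = os.path.basename(os.path.normpath(model_path)).lower()
    mtime = str(os.path.getmtime(model_path)).encode("utf-8")
    version = hashlib.sha256(mtime).hexdigest()[:12]
    return f"{name}:{version}"

With an explicit version, the import would look like openllm import opt /path/to/custom-path --model-version my-snapshot, where my-snapshot is a placeholder label.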


foxxxx001 commented on May 14, 2024

"openllm import"? I can't see this option.


aarnphm commented on May 14, 2024

WIP on #102


aarnphm commented on May 14, 2024

Please try out 0.1.20
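(With a pip-based install, that would be something like pip install --upgrade "openllm>=0.1.20".)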


skywolf123 commented on May 14, 2024

Please try out 0.1.20

Traceback (most recent call last):
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml_internal\tag.py", line 109, in from_str
return cls(name, version)
^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml_internal\tag.py", line 63, in init
validate_tag_str(lversion)
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml_internal\tag.py", line 40, in validate_tag_str
raise ValueError(
ValueError: \chatglm2-6b-int4 is not a valid BentoML tag: a tag's name or version must consist of alphanumeric characters, '_', '-', or '.', and must start and end with an alphanumeric character


aarnphm commented on May 14, 2024

can you send the full traceback here?


skywolf123 commented on May 14, 2024

can you send the full traceback here?

openllm import chatglm D:\chatglm-6b-int4

Converting 'D' to lowercase: 'd'.
Traceback (most recent call last):
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml_internal\tag.py", line 109, in from_str
return cls(name, version)
^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml_internal\tag.py", line 63, in init
validate_tag_str(lversion)
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml_internal\tag.py", line 40, in validate_tag_str
raise ValueError(
ValueError: \chatglm-6b-int4 is not a valid BentoML tag: a tag's name or version must consist of alphanumeric characters, '_', '-', or '.', and must start and end with an alphanumeric character

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in run_code
File "D:\Anaconda3\envs\llm_env\Scripts\openllm.exe_main
.py", line 7, in
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 1130, in call
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\cli.py", line 385, in wrapper
return func(*args, **attrs)
^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\cli.py", line 358, in wrapper
return_value = func(*args, **attrs)
^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\cli.py", line 333, in wrapper
return f(*args, **attrs)
^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\cli.py", line 1180, in download_models
).for_model(
^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm\models\auto\factory.py", line 129, in for_model
llm = model_class.from_pretrained(model_id, model_version=model_version, llm_config=llm_config, **attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\openllm_llm.py", line 648, in from_pretrained
_tag = bentoml.Tag.from_taglike(model_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml_internal\tag.py", line 96, in from_taglike
return cls.from_str(taglike)
^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda3\envs\llm_env\Lib\site-packages\bentoml_internal\tag.py", line 111, in from_str
raise BentoMLException(f"Invalid {cls.name} {tag_str}")
bentoml.exceptions.BentoMLException: Invalid Tag D:\chatglm-6b-int4
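One reading of this traceback (an interpretation, not an official diagnosis): BentoML tags use name:version syntax, so the colon after the Windows drive letter in D:\chatglm-6b-int4 is parsed as the tag separator. The path is split into name D (hence the "Converting 'D' to lowercase: 'd'." message) and version \chatglm-6b-int4, and the backslash then fails tag validation. A minimal sketch that reproduces the failure:

import bentoml

# The drive-letter colon makes the parser treat the path as a
# name:version tag; "\chatglm-6b-int4" is not a valid version string,
# so this raises bentoml.exceptions.BentoMLException.
bentoml.Tag.from_taglike("D:\\chatglm-6b-int4")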


lixiaoxiangzhi commented on May 14, 2024

I also encountered this issue.


Mercatoro commented on May 14, 2024

Hey there, we discussed more extensive custom-path support internally and want to share the decision: […]

Hey, loading the model from a local folder should also be possible from a docker container, correct?
I have the following as my last command in the Dockerfile: CMD ["openllm", "start" , "bigcode/starcoder", "--model-id", "/path/to/local/starcoder/model"]
Of course the path is mounted when running the container.


Mercatoro commented on May 14, 2024

Hello again, @aarnphm: the way you described for using a local model instead of downloading it every time does not seem to work at the moment. Here are my Dockerfile and the run command. It starts up fine without any errors, but it still downloads the whole model from the internet.
Do you need a new ticket / further information?

Dockerfile used to build image:

FROM python:3.10-slim
WORKDIR /code
ENV BENTOML_HOME="/root/srv/user/starcoder/"
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN pip install --upgrade pip
RUN --mount=type=secret,id=huggingfacetoken \
    huggingface-cli login --token $(cat /run/secrets/huggingfacetoken)
EXPOSE 3000
COPY . .
CMD ["openllm", "start" , "bigcode/starcoder", "--workers-per-resource", "0.5", "--model-id", "/root/srv/user/starcoder/starcoder"]

requirements.txt

huggingface_hub[cli]
bentoml
psutil
wheel
vllm==0.2.2
torch
transformers
openllm

docker run command to start from image:
nvidia-docker run --mount type=bind,source=/srv/user/starcoder/starcoder,target=/srv/user/starcoder/models/vllm-bigcode--starcoder/<hash> --gpus all -d -p 3005:3000 <starcoder_image>


aarnphm commented on May 14, 2024

You only need to run CMD ["openllm", "start", "/mount/path", ...]
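A minimal sketch of a matching setup, where /mount/path is a placeholder that must be the same path openllm start receives:

# last line of the Dockerfile
CMD ["openllm", "start", "/mount/path"]

# run command
nvidia-docker run --gpus all -d -p 3005:3000 --mount type=bind,source=/srv/user/starcoder/starcoder,target=/mount/path <starcoder_image>

Note that in the earlier example the bind-mount target (/srv/user/starcoder/models/...) matches neither BENTOML_HOME (/root/srv/user/starcoder/) nor the --model-id path, which may be why the model was downloaded again.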
