
Comments (14)

rmitsch commented on May 20, 2024

Hi @rkatriel, the difference between your sample and the one from the article is the choice of model. While we try to make our prompts work on as many models as possible, it's hard to guarantee cross-model compatibility. Especially smaller and older models (both of which apply to open_llama_3b) may not deliver satisfying results. A particular challenge here is that LLMs need to understand the output format, as we otherwise can't parse the result back.

I recommend using a newer/larger model.


rkatriel commented on May 20, 2024

Hi @rmitsch, which model in particular do you recommend I try? It can't be bigger than 7B, otherwise it won't fit in memory (or will take way too long to run).


rmitsch commented on May 20, 2024

Give Mistral a shot.


rkatriel commented on May 20, 2024

I tried Mistral with the config parameters from https://spacy.io/api/large-language-models:

        "model": {
            "@llm_models": "spacy.Mistral.v1",
            "name": "Mistral-7B-v0.1"
        },

But I'm getting KeyError: 'mistral'. Below is the traceback.

File "/Users/ron/PycharmProjects/AI/OpenAI/spacy-llm-example.py", line 5, in
nlp.add_pipe(
File "/opt/homebrew/lib/python3.11/site-packages/spacy/language.py", line 814, in add_pipe
pipe_component = self.create_pipe(
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/spacy/language.py", line 702, in create_pipe
resolved = registry.resolve(cfg, validate=validate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/confection/init.py", line 756, in resolve
resolved, _ = cls._make(
^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/confection/init.py", line 805, in _make
filled, _, resolved = cls._fill(
^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/confection/init.py", line 860, in _fill
filled[key], validation[v_key], final[key] = cls._fill(
^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/confection/init.py", line 877, in _fill
getter_result = getter(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/spacy_llm/models/hf/mistral.py", line 90, in mistral_hf
return Mistral(name=name, config_init=config_init, config_run=config_run)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/spacy_llm/models/hf/mistral.py", line 21, in init
super().init(name=name, config_init=config_init, config_run=config_run)
File "/opt/homebrew/lib/python3.11/site-packages/spacy_llm/models/hf/base.py", line 73, in init
self._model = self.init_model()
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/spacy_llm/models/hf/mistral.py", line 39, in init_model
model = transformers.AutoModelForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 456, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 957, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 671, in getitem
raise KeyError(key)
KeyError: 'mistral'

Process finished with exit code 1
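
(For context, the nlp.add_pipe call at the top of this traceback presumably looks something like the sketch below; the blank English pipeline and the task labels are assumptions based on the script discussed later in this thread.)

    import spacy

    nlp = spacy.blank("en")
    nlp.add_pipe(
        "llm",
        config={
            "task": {
                "@llm_tasks": "spacy.NER.v1",
                "labels": "SAAS_PLATFORM,PROGRAMMING_LANGUAGE,OPEN_SOURCE_LIBRARY"
            },
            "model": {
                "@llm_models": "spacy.Mistral.v1",
                "name": "Mistral-7B-v0.1"
            },
        },
    )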


rmitsch commented on May 20, 2024

Which spacy-llm version are you using? I can't reproduce this locally.


rkatriel commented on May 20, 2024

The version of spacy-llm I was using was 0.6.3. I upgraded to the latest (0.6.4) but still got the same error.

It looks like the problem was actually with the transformers library. I was using an incompatible version (4.30.0). After upgrading to the latest (4.35.2) Mistral loaded cleanly.
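
For anyone hitting the same KeyError: a quick sanity check is to print the installed transformers version; Mistral support only landed in transformers 4.34.0, so anything older fails exactly this way:

    import transformers

    # Mistral was added to transformers in 4.34.0; older versions raise
    # KeyError: 'mistral' when looking up the model architecture.
    print(transformers.__version__)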

But now I'm getting an error I encountered earlier while trying to make transformers work with an MPS device (#13096):

ValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in this format.

The offload folder - where the model weights will be offloaded - is an optional parameter when initializing Mistral (https://huggingface.co/docs/transformers/main/model_doc/mistral):

from transformers import AutoModelForCausalLM

checkpoint = 'mistralai/Mistral-7B-v0.1'
# offload_folder: any writable directory for the offloaded weights
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map='auto', offload_folder=offload_folder)

Any ideas how to resolve this?


rmitsch commented on May 20, 2024

You can pass any parameter to an HF model by including it in your config_init:

[components.llm.model]
@llm_models = "spacy.Mistral.v1"
name = "Mistral-7B-v0.1"

[components.llm.model.config_init]
offload_folder = "..."


rkatriel commented on May 20, 2024

Thanks, Raphael. I modified the code accordingly:

    config={
        "task": {
            "@llm_tasks": "spacy.NER.v1",
            "labels": "SAAS_PLATFORM,PROGRAMMING_LANGUAGE,OPEN_SOURCE_LIBRARY"
        },
        "model": {
            "@llm_models": "spacy.Mistral.v1",
            "name": "Mistral-7B-v0.1",
            "config_init": {
                "offload_folder": "."
            }
        },
    },

Now I'm getting a different error when transformers calls the accelerate package on the Mac:

TypeError: BFloat16 is not supported on MPS

This is a known issue with Mistral (see https://docs.mistral.ai/quickstart/). The suggestion is to "pass the parameter --dtype half to the Docker command line."

I tried passing --dtype half to the Python interpreter but it made no difference.


rmitsch commented on May 20, 2024

> I tried passing --dtype half to the Python interpreter but it made no difference.

Set torch_dtype = "half" in your config:

"model": {
    "@llm_models": "spacy.Mistral.v1",
    "name": "Mistral-7B-v0.1",
    "config_init": {
        "offload_folder": "."
        "torch_dtype": "half"
    }

Let me know whether that helps.
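
For reference, the traceback earlier in the thread shows that spacy-llm forwards config_init straight to transformers' from_pretrained, so the config above is roughly equivalent to this plain-transformers sketch:

    import torch
    from transformers import AutoModelForCausalLM

    # Roughly what spacy-llm does under the hood with the config_init entries:
    model = AutoModelForCausalLM.from_pretrained(
        "mistralai/Mistral-7B-v0.1",
        device_map="auto",          # chosen when no CUDA GPU is available
        offload_folder=".",         # from config_init
        torch_dtype=torch.float16,  # "half"; MPS does not support bfloat16
    )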


rkatriel commented on May 20, 2024

Thanks. That did the trick. Now the code runs without errors (albeit slowly, partly due to moderate memory pressure). However, once again no output is produced, perhaps related to the pad token warning. See the console output below.

/opt/homebrew/bin/python3.11 /Users/ron/PycharmProjects/AI/OpenAI/spacy-llm-example.py 
/opt/homebrew/lib/python3.11/site-packages/spacy_llm/models/hf/base.py:133: UserWarning: Couldn't find a CUDA GPU, so the setting 'device_map:auto' will be used, which may result in the LLM being loaded (partly) on the CPU or even the hard disk, which may be slow. Install cuda to be able to load and run the LLM on the GPU instead.
  warnings.warn(
Loading checkpoint shards: 100%|██████████| 2/2 [00:34<00:00, 17.34s/it]
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.

Process finished with exit code 0
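
(As an aside, the pad-token warning at the end is emitted by transformers' generate() and is usually benign; in a plain transformers script it can be silenced by passing the attention mask and an explicit pad_token_id, as in this illustrative sketch, which is not specific to spacy-llm:)

    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "mistralai/Mistral-7B-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

    inputs = tokenizer("Some prompt", return_tensors="pt")
    # generate() warns when attention_mask/pad_token_id are missing; the tokenizer
    # output already contains attention_mask, and pad_token_id is set explicitly:
    output = model.generate(**inputs, pad_token_id=tokenizer.eos_token_id, max_new_tokens=32)
    print(tokenizer.decode(output[0]))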


rmitsch commented on May 20, 2024

Log the raw responses by setting save_io:

    config={
        "task": {
            "@llm_tasks": "spacy.NER.v1",
            "labels": "SAAS_PLATFORM,PROGRAMMING_LANGUAGE,OPEN_SOURCE_LIBRARY"
        },
        "model": {
            "@llm_models": "spacy.Mistral.v1",
            "name": "Mistral-7B-v0.1",
            "config_init": {
                "offload_folder": ".",
                "torch_dtype": "half"
            }
        },
        "save_io": True
    },

You can access the response in doc.user_data["llm_io"]["response"]. Let me know what the LLM response is.
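
A minimal way to inspect that after processing, assuming the pipeline built from the config above is available as nlp:

    doc = nlp("Some example text.")
    # With save_io enabled, the raw prompt and response are stored on each doc:
    print(doc.user_data["llm_io"])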


rkatriel commented on May 20, 2024

I got back {}


rmitsch commented on May 20, 2024

When I run this, the response is "\n\nText:\n'''\nWe use the following open source libraries:\n\n* TensorFlow". I.e., Mistral kinda understands part of the task, but its response doesn't conform to the output conventions specified in our NER prompt.

I recommend using the latest version of the NER recipe (spacy.NER.v3; that way "PyTorch" is recognized as an OS library) and setting label_definitions in your config:

"task": {
    "@llm_tasks": "spacy.NER.v3",
    "labels": "SAAS_PLATFORM,PROGRAMMING_LANGUAGE,OPEN_SOURCE_LIBRARY",
    "label_definitions": {"SAAS_PLATFORM": ..., }
},

That will make it easier for the LLM. Unfortunately we can't guarantee that all (OS) models understand all prompts properly.
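
For illustration, label_definitions maps each label to a short natural-language description that gets embedded in the prompt; the wording below is hypothetical:

    "task": {
        "@llm_tasks": "spacy.NER.v3",
        "labels": "SAAS_PLATFORM,PROGRAMMING_LANGUAGE,OPEN_SOURCE_LIBRARY",
        "label_definitions": {
            "SAAS_PLATFORM": "A commercial software-as-a-service platform, e.g. Salesforce.",
            "PROGRAMMING_LANGUAGE": "A programming language, e.g. Python or Java.",
            "OPEN_SOURCE_LIBRARY": "An open source library or framework, e.g. TensorFlow or PyTorch."
        }
    },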


rkatriel commented on May 20, 2024

@rmitsch Thanks, I followed your advice, but unfortunately it made no difference. I also tried replacing "spacy.Mistral.v1" with "spacy.OpenLLaMA.v1" and "spacy.StableLM.v1", but the response from the LLM is always empty ({}), so the issue doesn't seem to be specific to a particular OS model. It would be great to have this simple example work for at least one HuggingFace model.

