comfyui_vlm_nodes's People

Contributors

extraphy, gokayfem, tobbez

comfyui_vlm_nodes's Issues

change of model download path?

Hi,
I'm not sure if this is expected, but the model download path keeps changing. Sometimes models download into the custom node folder, sometimes into the Hugging Face hub cache folder, and now there is a dedicated folder for them under the models folder. Is there a way to pin downloads to one specific folder?
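
A minimal sketch of one way to pin the location, assuming the nodes fetch models through huggingface_hub (the repo ID and target folder below are illustrative, not the node's actual defaults):

    from huggingface_hub import snapshot_download

    # With local_dir set, files land in this directory instead of the
    # ~/.cache/huggingface/hub cache; repo_id and path are placeholders.
    model_path = snapshot_download(
        repo_id="vikhyatk/moondream2",
        local_dir="ComfyUI/models/LLavacheckpoints/files_for_moondream2",
    )
    print(model_path)

Alternatively, setting the HF_HOME environment variable before launching ComfyUI pins the Hugging Face cache location globally.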

Cannot Import ComfyUI_VLM_nodes

Error during ComfyUI startup...

Traceback (most recent call last):
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1887, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 43, in <module>
    system_info = get_system_info()
                  ^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\install_init.py", line 52, in get_system_info
    system_info['avx2'] = 'avx2' in cpuinfo.get_cpu_info()['flags']
                                    ^^^^^^^^^^^^^^^^^^^^^^
  File "H:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\cpuinfo\cpuinfo.py", line 2762, in get_cpu_info
  File "json\__init__.py", line 359, in loads
  File "json\decoder.py", line 337, in decode
  File "json\decoder.py", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Cannot import H:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes module for custom nodes: Expecting value: line 1 column 1 (char 0)

Any suggestions?
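
One defensive workaround, sketched here rather than taken from the repo: guard the cpuinfo call referenced in the traceback so a crashing get_cpu_info() falls back instead of aborting the whole import:

    import importlib.util

    def has_avx2() -> bool:
        # Fall back to False if cpuinfo is missing or raises (e.g. the
        # JSONDecodeError above) instead of killing the node import.
        if importlib.util.find_spec('cpuinfo') is None:
            return False
        try:
            import cpuinfo
            return 'avx2' in cpuinfo.get_cpu_info().get('flags', [])
        except Exception:
            return False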

audioldm2

Hi.
How do I save music to a file after the audioldm2 node?
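
In case it helps, here is a standalone sketch of writing AudioLDM2 output to a WAV file with diffusers directly; the repo ID, prompt, and 16 kHz sample rate are assumptions based on the upstream AudioLDM2 release, not the node's code:

    import scipy.io.wavfile
    from diffusers import AudioLDM2Pipeline

    pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2")  # assumed repo ID
    audio = pipe("gentle piano melody", num_inference_steps=50).audios[0]
    # AudioLDM2 generates 16 kHz mono audio; adjust the rate if your build differs.
    scipy.io.wavfile.write("output.wav", rate=16000, data=audio)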

ComfyUI_VLM_nodes is very slow to launch

On my configuration, ComfyUI_VLM_nodes takes almost 1.7 seconds to launch. This is very long and makes it impractical for many serverless use cases. For comparison, here are the other nodes' launch times on my configuration:

Import times for custom nodes:
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/canvas_tab
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI-Logic
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI_Noise
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/masquerade-nodes-comfyui
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI_experiments
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI_essentials
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI_FizzNodes
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/Derfuu_ComfyUI_ModdedNodes
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/comfyui-workspace-manager
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/comfyui-deploy
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI-LLaVA-Captioner
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/comfyui-reactor-node
   0.0 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/comfyui-art-venture
   0.1 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI-Manager
   0.1 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI-Impact-Pack
   0.1 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite
   0.3 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/clipseg.py
   0.4 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/comfyui_controlnet_aux
   0.5 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI_InstantID
   1.7 seconds: /media/julien-blanchon/DNA1/ComfyUI/custom_nodes/ComfyUI_VLM_nodes

ComfyUI-LLaVA-Captioner takes 0.0 seconds.
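
Most of that 1.7 seconds is presumably spent importing heavy dependencies at module load time. A generic sketch of deferring such imports into the node's execution function (illustrative only, not the repo's actual code):

    class ExampleVLMNode:
        """Illustrative node: heavy libraries are imported on first use."""
        FUNCTION = "generate"

        def generate(self, image, prompt):
            # Deferred imports: the cost is paid on first execution,
            # not at ComfyUI startup.
            import torch
            from transformers import AutoModelForCausalLM
            # ... model loading and inference would go here ...
            return (prompt,)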

Mac: MoonDream node reporting errors

Error occurred when executing MoonDream:

The expanded size of the tensor (745) must match the existing size (746) at non-singleton dimension 1. Target sizes: [1, 745]. Tensor sizes: [1, 746]

what is this error - io.UnsupportedOperation: fileno

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "X:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\ImageFile.py", line 515, in _save
io.UnsupportedOperation: fileno

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "X:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "X:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "X:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "X:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\llavaloader.py", line 88, in generate_text
    pil_image.save(buffer, format="JPEG") # You can change the format if needed
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "X:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 2432, in save
    # Open also for reading ("+"), because TIFF save_all
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "X:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\JpegImagePlugin.py", line 824, in _save
    ImageFile._save(im, fp, [("jpeg", (0, 0) + im.size, 0, rawmode)], bufsize)
  File "X:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\ImageFile.py", line 519, in _save
  File "X:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\ImageFile.py", line 528, in _encode_tile
    im.encoderconfig = ()
    ^^^^^^^^^^^^
  File "X:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 437, in _getencoder
    return _E(self.scale + other.scale, self.offset + other.offset)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: function takes at most 14 arguments (17 given)
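
For reference, a minimal standalone reproduction of what the failing line in llavaloader.py attempts; if this runs cleanly in the embedded Python, the crash likely points to a broken or mismatched Pillow install rather than the node code:

    import io
    from PIL import Image

    img = Image.new("RGB", (64, 64), "red")
    buf = io.BytesIO()
    img.save(buf, format="JPEG")  # the call that raises in the traceback
    print(len(buf.getvalue()), "bytes written")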

python version 3.9 doesn't work anymore

For the last two or three days I can no longer use this addon, unfortunately. It worked before, but now it asks me to install packages that are not available on Python 3.9:

Installing collected packages: llama-cpp-python
Successfully installed llama-cpp-python-0.2.26+cu121
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Missing or outdated packages: llama-cpp-agent, mkdocs, mkdocs-material, mkdocstrings[python], docstring-parser
Installing/Updating missing packages...
ERROR: Ignored the following versions that require a different python version: 0.0.1 Requires-Python >=3.10; 0.0.10 Requires-Python >=3.10; 0.0.11 Requires-Python >=3.10; 0.0.12 Requires-Python >=3.10; 0.0.13 Requires-Python >=3.10; 0.0.14 Requires-Python >=3.10; 0.0.15 Requires-Python >=3.10; 0.0.16 Requires-Python >=3.10; 0.0.17 Requires-Python >=3.10; 0.0.2 Requires-Python >=3.10; 0.0.3 Requires-Python >=3.10; 0.0.4 Requires-Python >=3.10; 0.0.5 Requires-Python >=3.10; 0.0.6 Requires-Python >=3.10; 0.0.7 Requires-Python >=3.10; 0.0.8 Requires-Python >=3.10; 0.0.9 Requires-Python >=3.10
ERROR: Could not find a version that satisfies the requirement llama-cpp-agent (from versions: none)
ERROR: No matching distribution found for llama-cpp-agent
Traceback (most recent call last):
  File "/notebooks/ComfyUI/nodes.py", line 1893, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/notebooks/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/__init__.py", line 32, in <module>
    check_requirements_installed(llama_cpp_agent_path)
  File "/notebooks/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/__init__.py", line 22, in check_requirements_installed
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', *missing_packages])
  File "/usr/lib/python3.9/subprocess.py", line 373, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-m', 'pip', 'install', 'llama-cpp-agent', 'mkdocs', 'mkdocs-material', 'mkdocstrings[python]', 'docstring-parser']' returned non-zero exit status 1.

Request: CPU mode

It is difficult to get a GPU, so is it possible to run this on a CPU, even if that sacrifices speed?
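
The GGUF-based nodes run on llama-cpp-python, which supports CPU-only inference. A sketch, with an assumed local model path; n_gpu_layers=0 keeps every layer on the CPU:

    from llama_cpp import Llama

    llm = Llama(
        model_path="models/LLavacheckpoints/some-model.Q4_K_M.gguf",  # assumed path
        n_gpu_layers=0,  # CPU only: slower, but no CUDA/ROCm required
    )
    out = llm("Describe a sunset in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])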

The ‘Target directory for download’ stage is taking too long

For example, this is JoyTag, but every time I change the image, it takes 250 seconds to process, most of which is spent on ‘Target directory for download’ and ‘Fetching’.

got prompt
Target directory for download: D:\AI\ComfyUI_windows_portable\ComfyUI\models\LLavacheckpoints\files_for_joytagger
Fetching 6 files: 100%|██████████████████████████████████████████████████████████████████| 6/6 [01:21<00:00, 13.54s/it]
Model path: D:\AI\ComfyUI_windows_portable\ComfyUI\models\LLavacheckpoints\files_for_joytagger
Model path: D:\AI\ComfyUI_windows_portable\ComfyUI\models\LLavacheckpoints\files_for_joytagger
Prompt executed in 251.05 seconds

I don’t remember it taking this long before… I can’t think of any reason why.

VLM nodes won't import: AttributeError: 'NoneType' object has no attribute 'replace'

Do the VLM nodes need a CUDA device? I'm on ROCm and I'm seeing this error:

Traceback (most recent call last):
  File "/home/sagar/ComfyUI/nodes.py", line 1893, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/sagar/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/__init__.py", line 29, in <module>
    system_info = get_system_info()
  File "/home/sagar/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/install_init.py", line 48, in get_system_info
    system_info['cuda_version'] = "cu" + torch.version.cuda.replace(".", "").strip()
AttributeError: 'NoneType' object has no attribute 'replace'
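
torch.version.cuda is None on ROCm builds, so the unguarded .replace() fails. A sketch of the kind of guard that would avoid this, mirroring the get_system_info logic from the traceback:

    import torch

    system_info = {}
    if torch.version.cuda is not None:
        system_info['cuda_version'] = "cu" + torch.version.cuda.replace(".", "").strip()
    elif getattr(torch.version, 'hip', None) is not None:
        system_info['rocm_version'] = torch.version.hip  # ROCm build, no CUDA wheel
    print(system_info)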

About the model storage

Do you store models like Joytag and Moondream in the same location as other LLMs? I'm curious because Joytag seems to download every time.

Issue with moondream2

I am getting an issue while running moondream2 on Windows with an RTX 4090 GPU

FETCH DATA from: C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
#read_workflow_json_files_all C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-mixlab-nodes\app
FETCH DATA from: C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
ERROR:asyncio:Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "asyncio\events.py", line 80, in _run
  File "asyncio\proactor_events.py", line 165, in _call_connection_lost
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
ERROR:asyncio:Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
  File "asyncio\events.py", line 80, in _run
  File "asyncio\proactor_events.py", line 165, in _call_connection_lost
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func
    res_value = old_func(*final_args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\moondream2.py", line 68, in moondream2_generate_predictions
    response = self.predictor.generate_predictions(temp_path, text_input)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\moondream2.py", line 33, in generate_predictions
    generated_text = self.model.answer_question(enc_image, question, self.tokenizer)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\a.elmoussawi\.cache\huggingface\modules\transformers_modules\files_for_moondream2\moondream.py", line 92, in answer_question
    answer = self.generate(
             ^^^^^^^^^^^^^^
  File "C:\Users\a.elmoussawi\.cache\huggingface\modules\transformers_modules\files_for_moondream2\moondream.py", line 76, in generate
    output_ids = self.text_model.generate(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\generation\utils.py", line 1544, in generate
    return self.greedy_search(
           ^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\generation\utils.py", line 2404, in greedy_search
    outputs = self(
              ^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\a.elmoussawi\.cache\huggingface\modules\transformers_modules\files_for_moondream2\modeling_phi.py", line 709, in forward
    hidden_states = self.transformer(
                    ^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\a.elmoussawi\.cache\huggingface\modules\transformers_modules\files_for_moondream2\modeling_phi.py", line 675, in forward
    else func(*args)
         ^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\a.elmoussawi\.cache\huggingface\modules\transformers_modules\files_for_moondream2\modeling_phi.py", line 541, in forward
    attn_outputs = self.mixer(
                   ^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\a.elmoussawi\.cache\huggingface\modules\transformers_modules\files_for_moondream2\modeling_phi.py", line 514, in forward
    attn_output_function(x, past_key_values, attention_mask)
  File "C:\Users\a.elmoussawi\.cache\huggingface\modules\transformers_modules\files_for_moondream2\modeling_phi.py", line 494, in _forward_cross_attn
    return attn_func(
           ^^^^^^^^^^
  File "C:\Users\a.elmoussawi\.cache\huggingface\modules\transformers_modules\files_for_moondream2\modeling_phi.py", line 491, in <lambda>
    else lambda fn, *args, **kwargs: fn(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\amp\autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\amp\autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\a.elmoussawi\.cache\huggingface\modules\transformers_modules\files_for_moondream2\modeling_phi.py", line 318, in forward
    padding_mask.masked_fill_(key_padding_mask, 0.0)
RuntimeError: The expanded size of the tensor (748) must match the existing size (749) at non-singleton dimension 1. Target sizes: [1, 748]. Tensor sizes: [1, 749]

Failed to install - is not a valid wheel filename

I'm failing to install VLM_nodes with the following error:

Installing llama-cpp-python...
ERROR: llama_cpp_python-0.2.55-manylinux_2_31_x86_64.whl is not a valid wheel filename.
[notice] A new release of pip is available: 23.3.2 -> 24.0
[notice] To update, run: pip install --upgrade pip
Traceback (most recent call last):
  File "/comfyui/nodes.py", line 1899, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/comfyui/custom_nodes/comfyui_vlm_nodes/__init__.py", line 44, in <module>
    install_llama(system_info)
  File "/comfyui/custom_nodes/comfyui_vlm_nodes/install_init.py", line 93, in install_llama
    install_package("llama-cpp-python", custom_command=custom_command)
  File "/comfyui/custom_nodes/comfyui_vlm_nodes/install_init.py", line 73, in install_package
    subprocess.check_call(command)
  File "/usr/local/lib/python3.11/subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-m', 'pip', 'install', 'llama-cpp-python', '--no-cache-dir', 'https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.55/llama_cpp_python-0.2.55-manylinux_2_31_x86_64.whl']' returned non-zero exit status 1.
Cannot import /comfyui/custom_nodes/comfyui_vlm_nodes module for custom nodes: Command '['/usr/local/bin/python', '-m', 'pip', 'install', 'llama-cpp-python', '--no-cache-dir', 'https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.55/llama_cpp_python-0.2.55-manylinux_2_31_x86_64.whl']' returned non-zero exit status 1.

I have CUDA enabled on the machine, but as I'm building in Docker it might not be detected, so the installer picks the manylinux_2_31_x86_64 platform wheel, which doesn't exist in the llama_cpp_python releases.

Consultation on use

I would like to ask how to use a GGUF model to optimize prompts for text-to-image generation. In the examples, the prompt words are generated from an image description (there is an example using the API, but no example using a local model).
Error occurred when executing LLMSampler:

exception: access violation reading 0x000001E66891B000

  File "C:\comfyui\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\comfyui\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\comfyui\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\comfyui\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\suggest.py", line 291, in generate_text_advanced
    response = llm.create_chat_completion(messages=[
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 1638, in create_chat_completion
    return handler(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama_chat_format.py", line 2006, in __call__
    llama.create_completion(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 1474, in create_completion
    completion: Completion = next(completion_or_chunks)  # type: ignore
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 1000, in _create_completion
    for token in self.generate(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 684, in generate
    token = self.sample(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\llama.py", line 603, in sample
    id = sampling_context.sample(ctx_main=self._ctx, logits_array=logits)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\_internals.py", line 754, in sample
    ctx_main.sample_repetition_penalties(
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\llama_cpp\_internals.py", line 350, in sample_repetition_penalties
    llama_cpp.llama_sample_repetition_penalties(

ValueError: not enough values to unpack (expected 1, got 0)

Traceback (most recent call last):
  File "C:\ComfyUI\nodes.py", line 1887, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 26, in <module>
    check_requirements_installed(requirements_path)
  File "C:\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 11, in check_requirements_installed
    requirements = [pkg_resources.Requirement.parse(line.strip()) for line in f if line.strip()]
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 11, in <listcomp>
    requirements = [pkg_resources.Requirement.parse(line.strip()) for line in f if line.strip()]
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\.venv\Lib\site-packages\pkg_resources\__init__.py", line 3190, in parse
    (req,) = parse_requirements(s)
    ^^^^^^
ValueError: not enough values to unpack (expected 1, got 0)

Cannot import C:\ComfyUI\custom_nodes\ComfyUI_VLM_nodes module for custom nodes: not enough values to unpack (expected 1, got 0)

Windows 11
Python 3.11.7

How do I fix this?

Is it not possible to run this on Google Colab ComfyUI?

No matter how many times I install it, it doesn't run.

TypeError: Failed to fetch

#read_workflow_json_files_all /content/drive/MyDrive/install_comfyui/custom_nodes/comfyui-mixlab-nodes/app/
data_path: /content/drive/MyDrive/install_comfyui/custom_nodes/comfyui-mixlab-nodes/data
got prompt
ERROR:aiohttp.server:Error handling request
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_protocol.py", line 452, in _handle_request
    resp = await request_handler(request)
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_app.py", line 543, in _handle
    resp = await handler(request)
  File "/usr/local/lib/python3.10/dist-packages/aiohttp/web_middlewares.py", line 114, in impl
    return await handler(request)
  File "/content/drive/MyDrive/install_comfyui/server.py", line 47, in cache_control
    response: web.Response = await handler(request)
  File "/content/drive/MyDrive/install_comfyui/server.py", line 474, in post_prompt
    valid = execution.validate_prompt(prompt)
  File "/content/drive/MyDrive/install_comfyui/execution.py", line 620, in validate_prompt
    class_ = nodes.NODE_CLASS_MAPPINGS[prompt[x]['class_type']]
KeyError: 'class_type'

Mac: KeyError: 'flags' (dict returned by get_cpu_info does not contain 'flags')

hello, thanks for all your work on this project!

I'm running on a Mac M3, and get_cpu_info() dict does not contain 'flags' key.

>>> from cpuinfo import get_cpu_info
>>> get_cpu_info()
{'python_version': '3.11.7.final.0 (64 bit)', 'cpuinfo_version': [9, 0, 0], 'cpuinfo_version_string': '9.0.0', 'arch': 'ARM_8', 'bits': 64, 'count': 16, 'arch_string_raw': 'arm64', 'brand_raw': 'Apple M3 Max'}

This makes it work:

diff --git a/install_init.py b/install_init.py
index 889c151..024526c 100644
--- a/install_init.py
+++ b/install_init.py
@@ -49,7 +49,7 @@ def get_system_info():
     
     # Check for AVX2 support
     if importlib.util.find_spec('cpuinfo'):        
-        system_info['avx2'] = 'avx2' in cpuinfo.get_cpu_info()['flags']
+        system_info['avx2'] = 'avx2' in cpuinfo.get_cpu_info().get('flags',[])

ImportError: cannot import name 'SiglipVisionModel' from 'transformers'

I get this error while trying to run the MCLLAVA node after recent updates.

!!! Exception during processing !!!
Traceback (most recent call last):
  File "F:\Tools\ComfyUI\execution.py", line 148, in recursive_execute
    obj = class_def()
  File "F:\Tools\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\mcllava.py", line 56, in __init__
    self.predictor = MCLLaVAModelPredictor()
  File "F:\Tools\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\mcllava.py", line 23, in __init__
    self.model = AutoModelForCausalLM.from_pretrained(self.model_path, torch_dtype=torch.float16, trust_remote_code=True).to(self.device)
  File "F:\Tools\ComfyUI\venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 526, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "F:\Tools\ComfyUI\venv\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1091, in from_pretrained
    config_class = get_class_from_dynamic_module(
  File "F:\Tools\ComfyUI\venv\lib\site-packages\transformers\dynamic_module_utils.py", line 500, in get_class_from_dynamic_module
    return get_class_in_module(class_name, final_module.replace(".py", ""))
  File "F:\Tools\ComfyUI\venv\lib\site-packages\transformers\dynamic_module_utils.py", line 200, in get_class_in_module
    module = importlib.import_module(module_path)
  File "C:\Users\kunal\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\kunal\.cache\huggingface\modules\transformers_modules\files_for_mcllava\modeling_llava.py", line 11, in <module>
    from transformers import PreTrainedModel, SiglipVisionModel
ImportError: cannot import name 'SiglipVisionModel' from 'transformers' (F:\Tools\ComfyUI\venv\lib\site-packages\transformers\__init__.py)

[Feature Request] Support GBNF grammar

I learned that llama-cpp has an option to specify a GBNF grammar format.

The ability to specify formats precisely in this way, rather than through prompts, is very appealing, especially since I’m using VLM/LLM for conditional branching and as a tagger.

I would appreciate it if this could be entered as an option for the LLaVA/LLM Sampler.

Thank you.
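
For context, llama-cpp-python already exposes this: a grammar compiled with LlamaGrammar can be passed to completion calls, so the samplers would mainly need to surface the option. A sketch with a toy yes/no grammar (model path assumed):

    from llama_cpp import Llama, LlamaGrammar

    gbnf = 'root ::= "yes" | "no"'  # toy GBNF grammar forcing a yes/no answer
    grammar = LlamaGrammar.from_string(gbnf)

    llm = Llama(model_path="model.gguf")  # assumed local GGUF path
    out = llm("Is the sky blue? Answer:", grammar=grammar, max_tokens=4)
    print(out["choices"][0]["text"])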

better to remove the install.bat file, as installation is automatic in ComfyUI

Hi. It would be better to skip the install.bat file, since the same install is easy to do in ComfyUI and it may interfere with other installs. Can you suppress it, given that ComfyUI Manager will handle most of it?

Secondly, can you document the model paths and how the LLM models are loaded? It is not clear from this repo how we should test other LLM models with your nodes.

Third, using the OpenAI API is not possible, as they changed all the connection details.

Fails to load under a Linux ROCm environment

All packages from requirements.txt are installed and up to date.
Traceback (most recent call last):
  File "/sab_files/AI_Projects/ComfyUI/nodes.py", line 1887, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/sab_files/AI_Projects/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/__init__.py", line 29, in <module>
    system_info = get_system_info()
  File "/sab_files/AI_Projects/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/install_init.py", line 48, in get_system_info
    system_info['cuda_version'] = "cu" + torch.version.cuda.replace(".", "").strip()
AttributeError: 'NoneType' object has no attribute 'replace'

install roop

It seems that a specific LLM is downloaded every time ComfyUI is launched.

Python 3.9 incompatible

This is the complete error:

Traceback (most recent call last):
  File "/notebooks/ComfyUI/nodes.py", line 1893, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/notebooks/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/__init__.py", line 45, in <module>
    imported_module = importlib.import_module(f".nodes.{module_name}", name)
  File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/notebooks/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/joytag.py", line 1, in <module>
    from .joytagger import Models
  File "/notebooks/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/joytagger/Models.py", line 16, in <module>
    class VisionModel(nn.Module):
  File "/notebooks/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/joytagger/Models.py", line 27, in VisionModel
    def load_model(path: Path | str, device: str | None = None) -> 'VisionModel':
TypeError: unsupported operand type(s) for |: 'type' and 'type'

Cannot import /notebooks/ComfyUI/custom_nodes/ComfyUI_VLM_nodes module for custom nodes: unsupported operand type(s) for |: 'type' and 'type'

My guess is that the union operator | between types is not supported in Python 3.9.
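
Right: PEP 604's X | Y annotation syntax needs Python 3.10. A 3.9-compatible rewrite of the signature from the traceback would look like this (a sketch, not the repo's actual patch):

    from pathlib import Path
    from typing import Optional, Union

    def load_model(path: Union[Path, str], device: Optional[str] = None) -> 'VisionModel':
        ...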

Moondream RuntimeError: expanded size of tensor (749) must match the existing size (750) at non-singleton dimension 1.

Tried the Moondream node and it says this:

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-0246/utils.py", line 381, in new_func
    res_value = old_func(*final_args, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/moondream_script.py", line 76, in answer_questions
    full_sentence = self.text_model.answer_question(image_embeds, question)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/moondream/text_model.py", line 79, in answer_question
    answer = self.generate(
             ^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/moondream/text_model.py", line 71, in generate
    output_ids = self.model.generate(
                 ^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/transformers/generation/utils.py", line 1544, in generate
    return self.greedy_search(
           ^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/transformers/generation/utils.py", line 2404, in greedy_search
    outputs = self(
              ^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/moondream/phi/modeling_phi.py", line 992, in forward
    hidden_states = self.transformer(
                    ^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/moondream/phi/modeling_phi.py", line 933, in forward
    hidden_states = layer(
                    ^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/moondream/phi/modeling_phi.py", line 734, in forward
    attn_outputs = self.mixer(
                   ^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/moondream/phi/modeling_phi.py", line 688, in forward
    attn_output = self._forward_cross_attn(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/moondream/phi/modeling_phi.py", line 664, in _forward_cross_attn
    return self.inner_cross_attn(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_VLM_nodes/nodes/moondream/phi/modeling_phi.py", line 462, in forward
    padding_mask.masked_fill_(key_padding_mask, 0.0)
RuntimeError: The expanded size of the tensor (749) must match the existing size (750) at non-singleton dimension 1.  Target sizes: [1, 749].  Tensor sizes: [1, 750]

The reasoning process is too slow

Example: using Automatic Prompt Generation with default settings.
Prompt executed in 498.52 seconds
The graphics card is a 3090; the CPU is an Intel(R) Xeon(R) Platinum 8362.

VLM nodes broken after ComfyUI update all.

Installed a few days ago, no problem. I updated all this morning, and this set of custom nodes broke entirely. Cannot find any of the llava loaders using search, so I attempted to repair the nodes using the manager. Didn't work. Deleted the folder entirely, and reinstalled using git clone. Everything is still broken. I believe this is the relevant section of the Command window, but I'm not sure.

If there is more information needed, let me know. Cheers.

All packages from requirements.txt are installed and up to date.
llama-cpp installed
Missing or outdated packages: llama-cpp-agent, mkdocs, mkdocs-material, mkdocstrings[python], docstring-parser
Installing/Updating missing packages...
Collecting llama-cpp-agent
  Using cached llama_cpp_agent-0.0.17-py3-none-any.whl.metadata (21 kB)
Collecting mkdocs
  Using cached mkdocs-1.5.3-py3-none-any.whl.metadata (6.2 kB)
Collecting mkdocs-material
  Using cached mkdocs_material-9.5.9-py3-none-any.whl.metadata (16 kB)
Collecting docstring-parser
  Using cached docstring_parser-0.15-py3-none-any.whl (36 kB)
Collecting mkdocstrings[python]
  Using cached mkdocstrings-0.24.0-py3-none-any.whl.metadata (7.7 kB)
Collecting llama-cpp-python>=0.2.26 (from llama-cpp-agent)
  Using cached llama_cpp_python-0.2.43.tar.gz (36.6 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
ERROR: Exception:
Traceback (most recent call last):
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
             ^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\cli\req_command.py", line 245, in wrapper
    return func(self, options, args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\commands\install.py", line 377, in run
    requirement_set = resolver.resolve(
                      ^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 95, in resolve
    result = self._result = resolver.resolve(
                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve
    state = resolution.resolve(requirements, max_rounds=max_rounds)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 427, in resolve
    failure_causes = self._attempt_to_pin_criterion(name)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 239, in _attempt_to_pin_criterion
    criteria = self._get_updated_criteria(candidate)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 230, in _get_updated_criteria
    self._add_to_criteria(criteria, requirement, parent=candidate)
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria
    if not criterion.candidates:
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\resolvelib\structs.py", line 156, in __bool__
    return bool(self._sequence)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 155, in __bool__
    return any(self)
           ^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 143, in <genexpr>
    return (c for c in iterator if id(c) not in self._incompatible_ids)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 47, in _iter_built
    candidate = func()
                ^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 211, in _make_candidate_from_link
    self._link_candidate_cache[link] = LinkCandidate(
                                       ^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 293, in __init__
    super().__init__(
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 156, in __init__
    self.dist = self._prepare()
                ^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 225, in _prepare
    dist = self._prepare_distribution()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 304, in _prepare_distribution
    return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\operations\prepare.py", line 525, in prepare_linked_requirement
    return self._prepare_linked_requirement(req, parallel_builds)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\operations\prepare.py", line 640, in _prepare_linked_requirement
    dist = _get_prepared_distribution(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\operations\prepare.py", line 71, in _get_prepared_distribution
    abstract_dist.prepare_distribution_metadata(
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\distributions\sdist.py", line 54, in prepare_distribution_metadata
    self._install_build_reqs(finder)
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\distributions\sdist.py", line 124, in _install_build_reqs
    build_reqs = self._get_build_requires_wheel()
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\distributions\sdist.py", line 101, in _get_build_requires_wheel
    return backend.get_requires_for_build_wheel()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_internal\utils\misc.py", line 751, in get_requires_for_build_wheel
    return super().get_requires_for_build_wheel(config_settings=cs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\pyproject_hooks\_impl.py", line 166, in get_requires_for_build_wheel
    return self._call_hook('get_requires_for_build_wheel', {
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\pyproject_hooks\_impl.py", line 321, in _call_hook
    raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
  File "C:\Users\Snow\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 77, in _build_backend
    obj = import_module(mod_path)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "importlib\__init__.py", line 126, in import_module
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'scikit_build_core'


[notice] A new release of pip is available: 23.3.1 -> 24.0
[notice] To update, run: C:\Users\Snow\ComfyUI_windows_portable\python_embeded\python.exe -m pip install --upgrade pip
Traceback (most recent call last):
  File "C:\Users\Snow\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1893, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\Snow\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 32, in <module>    check_requirements_installed(llama_cpp_agent_path)
  File "C:\Users\Snow\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\__init__.py", line 22, in check_requirements_installed
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', *missing_packages])
  File "subprocess.py", line 413, in check_call
subprocess.CalledProcessError: Command '['C:\\Users\\Snow\\ComfyUI_windows_portable\\python_embeded\\python.exe', '-m', 'pip', 'install', 'llama-cpp-agent', 'mkdocs', 'mkdocs-material', 'mkdocstrings[python]', 'docstring-parser']' returned non-zero exit status 2.

Cannot import C:\Users\Snow\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_VLM_nodes module for custom nodes: Command '['C:\\Users\\Snow\\ComfyUI_windows_portable\\python_embeded\\python.exe', '-m', 'pip', 'install', 'llama-cpp-agent', 'mkdocs', 'mkdocs-material', 'mkdocstrings[python]', 'docstring-parser']' returned non-zero exit status 2.

Llava 34b issues

It seems that Llava 34b doesn't work with the current prompt forms. I'm not certain of this, but the outputs below suggest what is going on.

(Screenshot of the garbled outputs omitted: Screenshot_2024-03-07_15-18-03)

It looks like the 34b may need slightly different prompting than the other models, as described here:

ggerganov/llama.cpp#5267

Using the simple loader it started to spit out Chinese. Kind of a bummer, but maybe someone can guide me to a solution. If 34b is just a no-go, that's OK as well :)
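
If it helps anyone experimenting: per the linked llama.cpp discussion, llava-v1.6-34b reportedly expects a ChatML-style template rather than the default "USER: ... ASSISTANT:" form, which could explain the garbled output. A rough sketch of that prompt shape (details may vary per GGUF conversion):

    # ChatML-style template reportedly used by llava-v1.6-34b conversions;
    # this is an assumption based on the linked issue, not the node's code.
    prompt = (
        "<|im_start|>system\nYou are a helpful vision assistant.<|im_end|>\n"
        "<|im_start|>user\n<image>\nDescribe this image.<|im_end|>\n"
        "<|im_start|>assistant\n"
    )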

modelpatcher error

Cannot import W:\visionsofchaos\Text To Image\ComfyUI\ComfyUI\custom_nodes\ComfyUI_VLM_nodes module for custom nodes: cannot import name 'ToImage' from 'torchvision.transforms.v2' (W:\visionsofchaos\Text To Image\ComfyUI\ComfyUI\.venv\lib\site-packages\torchvision\transforms\v2\__init__.py)
Traceback (most recent call last):
  File "W:\visionsofchaos\Text To Image\ComfyUI\ComfyUI\nodes.py", line 1800, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "W:\visionsofchaos\Text To Image\ComfyUI\ComfyUI\custom_nodes\efficiency-nodes-comfyui\__init__.py", line 1, in <module>
    from .efficiency_nodes import NODE_CLASS_MAPPINGS
  File "W:\visionsofchaos\Text To Image\ComfyUI\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 4, in <module>
    from comfy.sd import ModelPatcher, CLIP, VAE
ImportError: cannot import name 'ModelPatcher' from 'comfy.sd' (W:\visionsofchaos\Text To Image\ComfyUI\ComfyUI\comfy\sd.py)

Solved: After the update, got an error with LLaVA

Error occurred when executing LlavaClipLoader:

[WinError 2] The system cannot find the file specified: 'W:\ComfyUI_4ALL\ComfyUI\custom_nodes\ComfyUI-Frame-Interpolation\nvrtc_dlls\bin'

File "W:\ComfyUI_4ALL\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "W:\ComfyUI_4ALL\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "W:\ComfyUI_4ALL\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "W:\ComfyUI_4ALL\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\llavaloader.py", line 57, in load_clip_checkpoint
clip = Llava15ChatHandler(clip_model_path = clip_path, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\derec\miniconda3\envs\comfyui\Lib\site-packages\llama_cpp\llama_chat_format.py", line 1064, in init
import llama_cpp.llava_cpp as llava_cpp
File "C:\Users\derec\miniconda3\envs\comfyui\Lib\site-packages\llama_cpp\llava_cpp.py", line 83, in
_libllava = _load_shared_library(_libllava_base_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\derec\miniconda3\envs\comfyui\Lib\site-packages\llama_cpp\llava_cpp.py", line 62, in _load_shared_library
os.add_dll_directory(os.path.join(os.environ["CUDA_PATH"], "bin"))
File "", line 1119, in add_dll_directory
llava

No seed option in LLM and LLAVA

Thank you for your important project.

I am trying to synchronize two LLMs with the help of a common seed, but unfortunately the seed option does not exist for some reason. Can you add this option?

(Screenshot showing the missing seed option omitted)
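
As a stopgap until a seed widget exists: llama-cpp-python's Llama constructor accepts a seed, so two samplers built on it could in principle be synchronized like this (a sketch; the nodes would still need to expose the parameter):

    from llama_cpp import Llama

    SEED = 1234
    llm_a = Llama(model_path="model_a.gguf", seed=SEED)  # assumed paths
    llm_b = Llama(model_path="model_b.gguf", seed=SEED)
    # Both instances now start from identically seeded RNGs.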

help resolve this issue please

"D:\000AI\00ComfyUI\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\audioldm2.py", line 1, in from diffusers import AudioLDM2Pipeline ImportError: cannot import name 'AudioLDM2Pipeline' from 'diffusers' (D:\000AI\00ComfyUI\python_embeded\Lib\site-packages\diffusers_init_.py)

Can't download the files for the "audioldm2" node

I downloaded the relevant files manually from Hugging Face; how should I place them?

(Screenshot of the manually downloaded files omitted: Screenshot 2024-03-03 233755)
It doesn't work that way, and shows errors:

An error happened while trying to locate the files on the Hub and we cannot find the appropriate snapshot folder for the specified revision on the local disk. Please check your internet connection and try again.

DEFAULT_ETAG_TIMEOUT

cannot import name 'DEFAULT_ETAG_TIMEOUT' from 'huggingface_hub.constants' (/usr/local/lib/python3.9/dist-packages/huggingface_hub/constants.py)

RuntimeError: Unknown model (vit_so400m_patch14_siglip_384)

When I try running either of the moondream nodes I get this traceback error

!!! Exception during processing !!!
Traceback (most recent call last):
  File "F:\Tools\ComfyUI\execution.py", line 148, in recursive_execute
    obj = class_def()
  File "F:\Tools\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\moondream2.py", line 39, in __init__
    self.predictor = Moondream2Predictor()
  File "F:\Tools\ComfyUI\custom_nodes\ComfyUI_VLM_nodes\nodes\moondream2.py", line 24, in __init__
    self.model = AutoModelForCausalLM.from_pretrained(self.model_path, trust_remote_code=True).to(self.device).eval()
  File "F:\Tools\ComfyUI\venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "F:\Tools\ComfyUI\venv\lib\site-packages\transformers\modeling_utils.py", line 3462, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "C:\Users\kunal\.cache\huggingface\modules\transformers_modules\files_for_moondream2\moondream.py", line 15, in __init__
    self.vision_encoder = VisionEncoder()
  File "C:\Users\kunal\.cache\huggingface\modules\transformers_modules\files_for_moondream2\vision_encoder.py", line 98, in __init__
    VisualHolder(timm.create_model("vit_so400m_patch14_siglip_384"))
  File "F:\Tools\ComfyUI\venv\lib\site-packages\timm\models\factory.py", line 67, in create_model
    raise RuntimeError('Unknown model (%s)' % model_name)
RuntimeError: Unknown model (vit_so400m_patch14_siglip_384)

I've located and downloaded the missing vit_so400m_patch14_siglip_384.safetensors file and tried putting it in the moondream2 folder, the clip_vision folder and the CLIP
