
examples's Issues

TypeError: cannot pickle '_thread.lock' object

I get the following error when running the Embedding Pipeline in the 1_build_image_search_engine.ipynb notebook:

TypeError: cannot pickle '_thread.lock' object


TypeError Traceback (most recent call last)
Cell In[3], line 18
11 yield item
13 # Embedding pipeline
14 p_embed = (
15 pipe.input('src')
16 .flat_map('src', 'img_path', load_image)
17 .map('img_path', 'img', ops.image_decode())
---> 18 .map('img', 'vec', ops.image_embedding.timm(model_name=MODEL, device=DEVICE))
19 )

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/towhee/runtime/pipeline.py:143, in Pipeline.map(self, input_schema, output_schema, fn, config)
141 uid = uuid.uuid4().hex
142 fn_action = self._to_action(fn)
--> 143 dag_dict = deepcopy(self._dag)
144 dag_dict[uid] = {
145 'inputs': input_schema,
146 'outputs': output_schema,
(...)
153 'next_nodes': [],
154 }
155 dag_dict[self._clo_node]['next_nodes'].append(uid)

File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil)
...
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "__reduce__", None)

TypeError: cannot pickle '_thread.lock' object

Deep Dive Reverse Video Search raises an error

import time
import towhee


# Please note the first time run will take time to download model and other files.

start = time.time()

collection = create_milvus_collection('x3d_m', 2048)

dc = (
    towhee.read_csv('reverse_video_search.csv')
      .runas_op['id', 'id'](func=lambda x: int(x))
      .video_decode.ffmpeg['path', 'frames'](sample_type='uniform_temporal_subsample', args={'num_samples': 16})
      .action_classification['frames', ('labels', 'scores', 'features')].pytorchvideo(
          model_name='x3d_m', skip_preprocess=True)
      .to_milvus['id', 'features'](collection=collection, batch=10)
)

end = time.time()

print('Total insert time: %.2fs'%(end-start))
print('Total number of inserted data is {}.'.format(collection.num_entities))

[Screenshot 2022-06-13 22:41:53 showing the error]

AttributeError: 'NoneType' object has no attribute 'loader'

I tried to run "video/deepfake_detection/1_deepfake_detection.ipynb" with the following code:

from towhee import ops, pipe, DataCollection

p = (
    pipe.input('path')
    .map('path', 'scores', ops.towhee.deepfake())
    .output('scores')
    # .output('path')
)

DataCollection(p('./train/deepfake_video/aejgdsuoqg.mp4')).show()

but an error happened.
The file path is correct; is there anything wrong with the code?

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/home/image_check/code/examples/image/reverse_image_search/2_deep_dive_image_search.ipynb Cell 1 line 4
      1 from towhee import ops, pipe, DataCollection
      3 p = (
----> 4     pipe.input('path')
      5     .map('path', 'scores', ops.towhee.deepfake())
      6     .output('scores')
      7     # .output('path')
      8 )
     10 DataCollection(p('./train/deepfake_video/aejgdsuoqg.mp4')).show()

File ~/anaconda3/envs/towhee/lib/python3.8/site-packages/towhee/runtime/pipeline.py:116, in Pipeline.output(self, *output_schema, **config_kws)
    113 dag_dict[self._clo_node]['next_nodes'].append(uid)
    115 run_pipe = RuntimePipeline(dag_dict, config=config_kws)
--> 116 run_pipe.preload()
    117 return run_pipe

File ~/anaconda3/envs/towhee/lib/python3.8/site-packages/towhee/runtime/runtime_pipeline.py:140, in RuntimePipeline.preload(self)
    136 def preload(self):
    137     """
    138     Preload the operators.
    139     """
--> 140     return _Graph(self._dag_repr.nodes, self._dag_repr.edges, self._operator_pool, self._thread_pool)
...
  File "/home/image_check/anaconda3/envs/towhee/lib/python3.8/site-packages/towhee/runtime/operator_manager/operator_loader.py", line 59, in _load_legacy_op
    module = importlib.util.module_from_spec(spec)
  File "", line 553, in module_from_spec
AttributeError: 'NoneType' object has no attribute 'loader'
Output is truncated.

Can the model be loaded from a local path?

image_embedding.timm(model_name='resnet50') cannot reach the network in some environments. Is it possible to load a local model instead, e.g.
image_embedding.timm(path='modelpath', model_name='resnet50')?

save and load model

How do I save and load a model in towhee? Sometimes we already have the model and want to use that one.

Model not loading in 1_build_question_answering_engine.ipynb

I am trying to replicate/run this (under 1_build_question_answering_engine.ipynb cell 5):

insert_pipe = (
    pipe.input('id', 'question', 'answer')
        .map('question', 'vec', ops.text_embedding.dpr(model_name='facebook/dpr-ctx_encoder-single-nq-base'))
        .map('vec', 'vec', lambda x: x / np.linalg.norm(x, axis=0))
        .map(('id', 'vec'), 'insert_status', ops.ann_insert.milvus_client(host='127.0.0.1', port='19530', collection_name='question_answer'))
        .output()
)

But I keep getting the following error:

2023-08-02 01:38:37,706 - 8171578624 - connectionpool.py-connectionpool:546 - DEBUG: https://huggingface.co:443 "HEAD /facebook/dpr-ctx_encoder-single-nq-base/resolve/main/tokenizer_config.json HTTP/1.1" 200 0
2023-08-02 01:38:37,747 - 8171578624 - dpr.py-dpr:39 - ERROR: Fail to load model by name: facebook/dpr-ctx_encoder-single-nq-base
2023-08-02 01:38:37,749 - 8171578624 - node.py-node:142 - INFO: text-embedding/dpr-0 ends with status: NodeStatus.FAILED

RuntimeError Traceback (most recent call last)
Cell In[124], line 6
1 insert_pipe = (
2 pipe.input('id', 'question', 'answer')
3 .map('question', 'vec', ops.text_embedding.dpr(model_name='facebook/dpr-ctx_encoder-single-nq-base'))
4 .map('vec', 'vec', lambda x: x / np.linalg.norm(x, axis=0))
5 .map(('id', 'vec'), 'insert_status', ops.ann_insert.milvus_client(host='127.0.0.1', port='19530', collection_name='question_answer'))
----> 6 .output()
7 )

File /opt/homebrew/lib/python3.11/site-packages/towhee/runtime/pipeline.py:101, in Pipeline.output(self, *output_schema)
98 dag_dict[self._clo_node]['next_nodes'].append(uid)
100 run_pipe = RuntimePipeline(dag_dict)
--> 101 run_pipe.preload()
102 return run_pipe

File /opt/homebrew/lib/python3.11/site-packages/towhee/runtime/runtime_pipeline.py:153, in RuntimePipeline.preload(self)
149 def preload(self):
150 """
151 Preload the operators.
152 """
--> 153 return _Graph(self._dag_repr.nodes, self._dag_repr.edges, self._operator_pool, self._thread_pool, TimeProfiler(False))

File /opt/homebrew/lib/python3.11/site-packages/towhee/runtime/runtime_pipeline.py:67, in _Graph.__init__(self, nodes, edges, operator_pool, thread_pool, time_profiler, trace_edges)
65 self.features = None
66 self._time_profiler.record(Event.pipe_name, Event.pipe_in)
---> 67 self._initialize()
68 self._input_queue = self._data_queues[0]

File /opt/homebrew/lib/python3.11/site-packages/towhee/runtime/runtime_pipeline.py:83, in _Graph._initialize(self)
81 node = create_node(self._nodes[name], self._operator_pool, in_queues, out_queues, self._time_profiler)
82 if not node.initialize():
---> 83 raise RuntimeError(node.err_msg)
84 self._node_runners.append(node)

RuntimeError: Node-text-embedding/dpr-0 runs failed, error msg: Create text-embedding/dpr-0 operator text-embedding/dpr:main with args None and kws {'model_name': 'facebook/dpr-ctx_encoder-single-nq-base'} failed, err:
DPRContextEncoder requires the PyTorch library but it was not found in your environment. Checkout the instructions on the
installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment.
Please note that you may need to restart your runtime after installation.
, Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/towhee/runtime/nodes/node.py", line 88, in initialize
self._op = self._op_pool.acquire_op(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/towhee/runtime/operator_manager/operator_pool.py", line 106, in acquire_op
op = self._op_loader.load_operator(hub_op_id, op_args, op_kws, tag, latest)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/towhee/runtime/operator_manager/operator_loader.py", line 154, in load_operator
op = factory(function, arg, kws, tag, latest)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/towhee/runtime/operator_manager/operator_loader.py", line 137, in _load_operator_from_hub
return self._load_operator_from_path(path, function, arg, kws, tag)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/towhee/runtime/operator_manager/operator_loader.py", line 125, in _load_operator_from_path
return self._instance_operator(op, arg, kws) if op is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/towhee/runtime/operator_manager/operator_loader.py", line 163, in _instance_operator
return op(*arg, **kws) if kws is not None else op(*arg)
^^^^^^^^^^^^^^^
File "/Users/jess/.towhee/operators/text-embedding/dpr/versions/main/init.py", line 19, in dpr
return Dpr(**kwargs)
^^^^^^^^^^^^^
File "/Users/jess/.towhee/operators/text-embedding/dpr/versions/main/dpr.py", line 40, in init
raise e
File "/Users/jess/.towhee/operators/text-embedding/dpr/versions/main/dpr.py", line 37, in init
self.model = DPRContextEncoder.from_pretrained(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1039, in getattribute
requires_backends(cls, cls._backends)
File "/opt/homebrew/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1027, in requires_backends
raise ImportError("".join(failed))
ImportError:
DPRContextEncoder requires the PyTorch library but it was not found in your environment. Checkout the instructions on the
installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment.
Please note that you may need to restart your runtime after installation.

ValidationError: 1 validation error for TupleForm while running reverse video search at Evaluation Section

I am using
towhee==1.1.1
pydantic==1.10.11


ValidationError Traceback (most recent call last)
Cell In[10], line 31
19 aps.append(sum(precisions) / len(precisions))
21 return sum(aps) / len(aps)
23 eval_pipe = (
24 pipe.input('path')
25 .flat_map('path', 'path', lambda x: glob.glob(x))
26 .map('path', 'frames', ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample', args={'num_samples': 16}))
27 .map('frames', ('labels', 'scores', 'features'), ops.action_classification.pytorchvideo(model_name='x3d_m', skip_preprocess=True))
28 .map('features', 'result', ops.ann_search.milvus_client(host='127.0.0.1', port='19530', collection_name='x3d_m', limit=10))
29 .map('result', 'predict', lambda x: [i[0] for i in x])
30 .map('path', 'ground_truth', ground_truth)
---> 31 .window_all(('ground_truth', 'predict'), 'mHR', mean_hit_ratio)
32 .window_all(('ground_truth', 'predict'), 'mAP', mean_average_precision)
33 .output('mHR', 'mAP')
34 )
36 res = DataCollection(eval_pipe('./test/*/*.mp4'))
37 res.show()

File ~/Documents/milvus/.milvusenv/lib/python3.11/site-packages/towhee/runtime/pipeline.py:348, in Pipeline.window_all(self, input_schema, output_schema, fn, config)
326 def window_all(self, input_schema, output_schema, fn, config=None) -> 'Pipeline':
327 """
328 Read all rows as single window and perform action.
329
(...)
346 [10, 14]
347 """
--> 348 output_schema = self._check_schema(output_schema)
349 input_schema = self._check_schema(input_schema)
351 uid = uuid.uuid4().hex

File ~/Documents/milvus/.milvusenv/lib/python3.11/site-packages/towhee/runtime/pipeline.py:517, in Pipeline._check_schema(schema)
515 @staticmethod
516 def _check_schema(schema):
--> 517 return TupleForm(schema_data=schema).schema_data

File ~/Documents/milvus/.milvusenv/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.init()

ValidationError: 1 validation error for TupleForm
schema_data -> 0
string does not match regex "^[a-z][a-z0-9_]*$" (type=value_error.str.regex; pattern=^[a-z][a-z0-9_]*$)
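Judging from the regex in the message, the output names passed to window_all ('mHR', 'mAP') contain uppercase letters, which the schema check rejects. A minimal sketch of a workaround, assuming the rest of the notebook (ground_truth, mean_hit_ratio, mean_average_precision and the Milvus collection) stays unchanged, is to use lowercase column names:

import glob
from towhee import pipe, ops, DataCollection

eval_pipe = (
    pipe.input('path')
    .flat_map('path', 'path', lambda x: glob.glob(x))
    .map('path', 'frames', ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample', args={'num_samples': 16}))
    .map('frames', ('labels', 'scores', 'features'), ops.action_classification.pytorchvideo(model_name='x3d_m', skip_preprocess=True))
    .map('features', 'result', ops.ann_search.milvus_client(host='127.0.0.1', port='19530', collection_name='x3d_m', limit=10))
    .map('result', 'predict', lambda x: [i[0] for i in x])
    .map('path', 'ground_truth', ground_truth)
    # lowercase names satisfy the ^[a-z][a-z0-9_]*$ schema check
    .window_all(('ground_truth', 'predict'), 'mean_hr', mean_hit_ratio)
    .window_all(('ground_truth', 'predict'), 'mean_ap', mean_average_precision)
    .output('mean_hr', 'mean_ap')
)

res = DataCollection(eval_pipe('./test/*/*.mp4'))
res.show()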

TypeError when running the example code

Hi, I am learning from this example to build a reverse image search project, and I ran into a TypeError.
It happens at the Embedding pipeline step (loading the images and seeing the goldfish). I don't know where the problem is and would appreciate any help, thanks!
[Screenshot 20230322184649 showing the error]

How to reload the trained model weights of resnet50

The following code snippet is a provided example of using the pre-trained ResNet50 model ('resnet50') to generate an image embedding:

p = (
    pipe.input('path')
    .map('path', 'img', ops.image_decode())
    .map('img', 'vec', ops.image_embedding.timm(model_name='resnet50'))
    .output('img', 'vec')
)

But I want to use fine-tuned ResNet50 weights to extract embeddings. How do I reload the trained model weights of resnet50?
Could you please provide an example?
Thanks!
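The next issue below mentions a checkpoint_path argument on ops.image_embedding.timm. Assuming that parameter is available in your towhee version, a minimal sketch for loading fine-tuned ResNet50 weights would be:

from towhee import pipe, ops

# hypothetical path to your fine-tuned weights file
MY_CHECKPOINT = './checkpoints/resnet50_finetuned.pth'

p = (
    pipe.input('path')
    .map('path', 'img', ops.image_decode())
    # checkpoint_path is assumed to point the timm operator at local weights
    .map('img', 'vec', ops.image_embedding.timm(model_name='resnet50', checkpoint_path=MY_CHECKPOINT))
    .output('img', 'vec')
)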

Loading Local Models and Downloading Online Models

Hello. When I use ops.image_embedding.timm(model_name='resnet50', checkpoint_path=my_checkpoint_path), even though I point checkpoint_path at a specific model file, it still checks whether a pre-trained model exists in the .cache directory and, if it is not present, automatically downloads it. In this case, is it actually using the model I specified in checkpoint_path? Thank you!

Gradio Showcase error

In the Image Search Engine example, the last step (Gradio) fails with the following error:

2022-08-26 23:06:57,117 - 12779106304 - decorators.py-decorators:95 - ERROR: RPC error: [search], <MilvusException: (code=1, message=checkIfLoaded failed when search, collection:reverse_image_search_norm, partitions:[], err = showPartitions failed, collection = reverse_image_search_norm, partitionIDs = [], reason = collection 435558044005564417 has not been loaded into QueryNode)>, <Time:{'RPC start': '2022-08-26 23:06:57.105122', 'RPC error': '2022-08-26 23:06:57.117755'}>
Traceback (most recent call last):
File "/Users/borye/miniforge3/lib/python3.9/site-packages/gradio/routes.py", line 247, in run_predict
output = await app.blocks.process_api(
File "/Users/borye/miniforge3/lib/python3.9/site-packages/gradio/blocks.py", line 645, in process_api
output = self.postprocess_data(fn_index, predictions, state)
File "/Users/borye/miniforge3/lib/python3.9/site-packages/gradio/blocks.py", line 594, in postprocess_data
prediction_value = predictions[i]
TypeError: '_Reason' object is not subscriptable
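The Milvus message says the collection has not been loaded into the query node, so a likely fix (a hedged sketch using the standard pymilvus Collection.load() call) is to load the collection once before the Gradio app starts serving searches:

from pymilvus import connections, Collection

connections.connect(host='127.0.0.1', port='19530')

# load the collection into memory so that searches can be served
collection = Collection('reverse_image_search_norm')
collection.load()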

Pymilvus: The data fields number is not match with schema.

RuntimeError Traceback (most recent call last)
Cell In[33], line 6
4 next(reader)
5 for row in reader:
----> 6 insert_pipe(*row)

File D:\anaconda\envs\DL\lib\site-packages\towhee\runtime\runtime_pipeline.py:149, in RuntimePipeline.__call__(self, *inputs)
147 if self._enable_trace:
148 self._time_profiler_list.append(graph.time_profiler)
--> 149 return graph(inputs)

File D:\anaconda\envs\DL\lib\site-packages\towhee\runtime\runtime_pipeline.py:104, in _Graph.__call__(self, inputs)
102 def __call__(self, inputs: Union[Tuple, List]):
103 f = self.async_call(inputs)
--> 104 return f.result()

File D:\anaconda\envs\DL\lib\site-packages\towhee\runtime\runtime_pipeline.py:34, in _GraphResult.result(self)
33 def result(self):
---> 34 ret = self._graph.result()
35 del self._graph
36 return ret

File D:\anaconda\envs\DL\lib\site-packages\towhee\runtime\runtime_pipeline.py:87, in _Graph.result(self)
85 errs += node.err_msg + '\n'
86 if errs:
---> 87 raise RuntimeError(errs)
88 end_edge_num = self._nodes['_output'].out_edges[0]
89 res = self._data_queues[end_edge_num]

RuntimeError: Node-ann-insert/milvus-client-2 runs failed, error msg: <DataTypeNotMatchException: (code=0, message=The data fields number is not match with schema.)>, Traceback (most recent call last):
File "D:\anaconda\envs\DL\lib\site-packages\towhee\runtime\nodes\node.py", line 156, in _call
return True, self._op(*inputs), None
File "C:\Users\Administrator.towhee\operators\ann-insert\milvus-client\versions\main\milvus_client.py", line 45, in call
mr = self._collection.insert(row)
File "C:\Users\Administrator\AppData\Roaming\Python\Python39\site-packages\pymilvus\orm\collection.py", line 535, in insert
if not self._check_insert_data_schema(data):
File "C:\Users\Administrator\AppData\Roaming\Python\Python39\site-packages\pymilvus\orm\collection.py", line 187, in _check_insert_data_schema
raise DataTypeNotMatchException(0, ExceptionsMessage.FieldsNumInconsistent)
pymilvus.orm.exceptions.DataTypeNotMatchException: <DataTypeNotMatchException: (code=0, message=The data fields number is not match with schema.)>

pymilvus did not accept VARCHAR in my setup, so I changed the field to INT64; this error appeared while importing the csv file.
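The exception means the number of values handed to insert does not equal the number of fields defined in the collection schema (auto-id primary keys excluded). A minimal sketch of the rule, with a hypothetical two-field schema, is shown below; each inserted batch must carry exactly one column per non-auto-id field:

from pymilvus import connections, FieldSchema, CollectionSchema, DataType, Collection

connections.connect(host='127.0.0.1', port='19530')

fields = [
    FieldSchema(name='id', dtype=DataType.INT64, is_primary=True, auto_id=False),
    FieldSchema(name='embedding', dtype=DataType.FLOAT_VECTOR, dim=4),
]
collection = Collection(name='fields_demo', schema=CollectionSchema(fields))

# two fields in the schema -> exactly two columns in the insert call
collection.insert([[1, 2], [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]])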

dpr embedding error: TypeError: can't pickle _thread.lock objects

Running the demo example fails.

from towhee import pipe, ops, DataCollection

if __name__ == '__main__':
    p = (
        pipe.input('text')
        .map('text', 'vec', ops.text_embedding.dpr(model_name='facebook/dpr-ctx_encoder-single-nq-base'))
        .output('text', 'vec')
    )

    DataCollection(p('Hello, world.')).show()

Traceback (most recent call last):
File "F:/Project/towhee_test/main.py", line 7, in
.output('text', 'vec')
File "E:\Anaconda3\envs\towhee_env\lib\site-packages\towhee\runtime\pipeline.py", line 103, in output
dag_dict = deepcopy(self._dag)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 241, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 241, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 241, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 180, in deepcopy
y = _reconstruct(x, memo, *rv)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 281, in _reconstruct
state = deepcopy(state, memo)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 241, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "E:\Anaconda3\envs\towhee_env\lib\copy.py", line 169, in deepcopy
rv = reductor(4)
TypeError: can't pickle _thread.lock objects

Process finished with exit code 1

[Text Image Search] RuntimeError: Node-ann-insert/milvus-client-4 runs failed

Hi there,
When I cloned and tried to run the "text_image_search" project (main branch), an error occurred during the step "Load Image Embeddings into Milvus". The error is shown in the screenshot below:
[screenshot of the error]
P.S. I am using milvus 2.0.2 and pymilvus 2.0.2, with towhee 0.9.0.
(Please forgive my poor English :)

Audio Fingerprint example, indexing takes a very long time

I am building the Audio Fingerprint example and working through the file audio_fingerprint_beginner.ipynb. In that notebook, the code creates a Milvus collection. It appears to create an empty collection (I cannot see any reference to the preloaded audio data in the call that constructs the Collection).

It then goes on to create an index on this collection, as shown in the code fragment below. However, the call to collection.create_index(...) takes forever (days) on my Macbook M1. Is this expected? Am I missing something obvious? (I am new to Milvus and Towhee.)

I am also including a snippet of the log output of the Milvus docker container; maybe this can help.

Thanks very much for any help that you can provide!

# Create Milvus collection
fields = [
    FieldSchema(name='id', dtype=DataType.INT64, description='embedding ids', is_primary=True, auto_id=True),
    FieldSchema(name='embedding', dtype=DataType.FLOAT_VECTOR, description='audio embeddings', dim=DIM),
    FieldSchema(name='path', dtype=DataType.VARCHAR, description='audio path', max_length=500)
    ]
schema = CollectionSchema(fields=fields, description='audio fingerprints')

if utility.has_collection(COLLECTION_NAME):
    collection = Collection(COLLECTION_NAME)
    collection.drop() # drop collection if it exists
    
collection = Collection(name=COLLECTION_NAME, schema=schema)

# Create index
index_params = {
    'metric_type': METRIC_TYPE,
    'index_type': INDEX_TYPE,
    'params':{"nlist":2048}
}

# the following line takes forever to run
status = collection.create_index(field_name='embedding', index_params=index_params)
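For what it's worth, one hedged way to see whether the build is actually making progress (assuming utility.index_building_progress is available in the installed pymilvus) is:

from pymilvus import utility

# reports how many rows have been indexed so far for this collection
print(utility.index_building_progress(COLLECTION_NAME))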

Here is what Milvus is logging as create_index runs:

...
milvus-standalone | [2023/04/26 12:33:40.211 +00:00] [DEBUG] [proxy/impl.go:1742] ["DescribeIndex received"] [traceID=66b3ef4224d23663] [role=proxy] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:40.219 +00:00] [DEBUG] [allocator/id.go:140] ["IDAllocator pickCanDoFunc"] [need=1] [total=199985] [remainReqCnt=0]
milvus-standalone | [2023/04/26 12:33:40.220 +00:00] [DEBUG] [proxy/impl.go:1773] ["DescribeIndex enqueued"] [traceID=66b3ef4224d23663] [role=proxy] [MsgID=441060534750543888] [BeginTs=441060535878025217] [EndTs=441060535878025217] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:40.224 +00:00] [DEBUG] [rootcoord/root_coord.go:1880] [DescribeIndex] [role=rootcoord] ["collection name"=nnfp] ["field name"=] [msgID=441060534750543888]
milvus-standalone | [2023/04/26 12:33:40.228 +00:00] [DEBUG] [rootcoord/root_coord.go:1905] ["DescribeIndex success"] [role=rootcoord] ["collection name"=nnfp] ["field name"=] ["index names"="[_default_idx]"] [msgID=441060534750543888]
milvus-standalone | [2023/04/26 12:33:40.231 +00:00] [DEBUG] [proxy/impl.go:1816] ["DescribeIndex done"] [traceID=66b3ef4224d23663] [role=proxy] [MsgID=441060534750543888] [BeginTs=441060535878025217] [EndTs=441060535878025217] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:40.747 +00:00] [DEBUG] [proxy/impl.go:1742] ["DescribeIndex received"] [traceID=2fa65ff3f830ddf6] [role=proxy] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:40.751 +00:00] [DEBUG] [allocator/id.go:140] ["IDAllocator pickCanDoFunc"] [need=1] [total=199984] [remainReqCnt=0]
milvus-standalone | [2023/04/26 12:33:40.751 +00:00] [DEBUG] [proxy/impl.go:1773] ["DescribeIndex enqueued"] [traceID=2fa65ff3f830ddf6] [role=proxy] [MsgID=441060534750543889] [BeginTs=441060536022466562] [EndTs=441060536022466562] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:40.754 +00:00] [DEBUG] [rootcoord/root_coord.go:1880] [DescribeIndex] [role=rootcoord] ["collection name"=nnfp] ["field name"=] [msgID=441060534750543889]
milvus-standalone | [2023/04/26 12:33:40.754 +00:00] [DEBUG] [rootcoord/root_coord.go:1905] ["DescribeIndex success"] [role=rootcoord] ["collection name"=nnfp] ["field name"=] ["index names"="[_default_idx]"] [msgID=441060534750543889]
milvus-standalone | [2023/04/26 12:33:40.756 +00:00] [DEBUG] [proxy/impl.go:1816] ["DescribeIndex done"] [traceID=2fa65ff3f830ddf6] [role=proxy] [MsgID=441060534750543889] [BeginTs=441060536022466562] [EndTs=441060536022466562] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:41.270 +00:00] [DEBUG] [proxy/impl.go:1742] ["DescribeIndex received"] [traceID=145563f16f72ffad] [role=proxy] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:41.274 +00:00] [DEBUG] [allocator/id.go:140] ["IDAllocator pickCanDoFunc"] [need=1] [total=199983] [remainReqCnt=0]
milvus-standalone | [2023/04/26 12:33:41.275 +00:00] [DEBUG] [proxy/impl.go:1773] ["DescribeIndex enqueued"] [traceID=145563f16f72ffad] [role=proxy] [MsgID=441060534750543890] [BeginTs=441060536153276418] [EndTs=441060536153276418] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:41.278 +00:00] [DEBUG] [rootcoord/root_coord.go:1880] [DescribeIndex] [role=rootcoord] ["collection name"=nnfp] ["field name"=] [msgID=441060534750543890]
milvus-standalone | [2023/04/26 12:33:41.279 +00:00] [DEBUG] [rootcoord/root_coord.go:1905] ["DescribeIndex success"] [role=rootcoord] ["collection name"=nnfp] ["field name"=] ["index names"="[_default_idx]"] [msgID=441060534750543890]
milvus-standalone | [2023/04/26 12:33:41.281 +00:00] [DEBUG] [proxy/impl.go:1816] ["DescribeIndex done"] [traceID=145563f16f72ffad] [role=proxy] [MsgID=441060534750543890] [BeginTs=441060536153276418] [EndTs=441060536153276418] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:41.805 +00:00] [DEBUG] [proxy/impl.go:1742] ["DescribeIndex received"] [traceID=7684d69bea24aba4] [role=proxy] [db=] [collection=nnfp] [field=] ["index name"=]
milvus-standalone | [2023/04/26 12:33:41.815 +00:00] [DEBUG] [allocator/id.go:140] ["IDAllocator pickCanDoFunc"] [need=1] [total=199982] [remainReqCnt=0]
milvus-standalone | [2023/04/26 12:33:41.816 +00:00] [DEBUG] [proxy/impl.go:1773] ["DescribeIndex enqueued"] [traceID=7684d69bea24aba4] [role=proxy] [MsgID=441060534750543891] [BeginTs=441060536299552769] [EndTs=441060536299552769] [db=] [collection=nnfp] [field=] ["index name"=]
...

RuntimeError: Load operator failed

p = (
     pipe.input('path')
     .map('path', 'img', ops.image_decode.cv2('rgb'))
     .map('img', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch16', modality='image'))
     .map('vec', 'vec', lambda x: x / np.linalg.norm(x))
     .output('img', 'vec') 
)
DataCollection(p('./teddy.png')).show()

Towhee version: 0.9.1.dev111

Downloading https://towhee.io/image-text-embedding/clip/resolve/branch/main/README.md to /Users/jn/.towhee/operators/image-text-embedding/clip/files: 100%|██████████| 5.20k/5.20k [00:00<00:00, 968kB/s]
Downloading https://towhee.io/image-text-embedding/clip/resolve/branch/main/tabular2.png to /Users/jn/.towhee/operators/image-text-embedding/clip/files: 100%|██████████| 22.9k/22.9k [00:00<00:00, 3.03MB/s]
Downloading https://towhee.io/image-text-embedding/clip/resolve/branch/main/tabular1.png to /Users/jn/.towhee/operators/image-text-embedding/clip/files: 100%|██████████| 189k/189k [00:00<00:00, 1.45MB/s]
Downloading https://towhee.io/image-text-embedding/clip/resolve/branch/main/vec1.png to /Users/jn/.towhee/operators/image-text-embedding/clip/files: 100%|██████████| 13.3k/13.3k [00:00<00:00, 793kB/s]
2023-05-11 03:21:10,276 - 4730060288 - operator_loader.py-operator_loader:131 - ERROR: HTTPSConnectionPool(host='towhee.io', port=443): Max retries exceeded with url: /image-text-embedding/clip/resolve/branch/main/.gitattributes (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))), Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/urllib3/connectionpool.py", line 700, in urlopen
self._prepare_proxy(conn)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/urllib3/connectionpool.py", line 996, in _prepare_proxy
conn.connect()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/urllib3/connection.py", line 419, in connect
self.sock = ssl_wrap_socket(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1040, in _create
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1129)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='towhee.io', port=443): Max retries exceeded with url: /image-text-embedding/clip/resolve/branch/main/.gitattributes (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/runtime/operator_manager/operator_loader.py", line 128, in _load_operator_from_hub
path = get_operator(operator=function, tag=tag, latest=latest)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/hub/init.py", line 23, in get_operator
return _CACHE_MANAGER.get_operator(operator, tag, install_reqs, latest)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/hub/cache_manager.py", line 97, in get_operator
download_operator(author, repo, tag, download_path, install_reqs, latest)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/hub/downloader.py", line 171, in download_operator
_Downloader(fs).download()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/hub/downloader.py", line 158, in download
_ = [i.result() for i in futures]
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/hub/downloader.py", line 158, in
_ = [i.result() for i in futures]
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 445, in result
return self.__get_result()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/hub/downloader.py", line 136, in download_url_to_file
with requests.get(url, stream=True) as r:
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/requests/adapters.py", line 563, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='towhee.io', port=443): Max retries exceeded with url: /image-text-embedding/clip/resolve/branch/main/.gitattributes (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))


RuntimeError Traceback (most recent call last)
Cell In[6], line 2
1 p = (
----> 2 pipe.input('path')
3 .map('path', 'img', ops.image_decode.cv2('rgb'))
4 .map('img', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch16', modality='image'))
5 .map('vec', 'vec', lambda x: x / np.linalg.norm(x))
6 .output('img', 'vec')
7 )
9 DataCollection(p('./teddy.png')).show()

File /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/runtime/pipeline.py:101, in Pipeline.output(self, *output_schema)
98 dag_dict[self._clo_node]['next_nodes'].append(uid)
100 run_pipe = RuntimePipeline(dag_dict)
--> 101 run_pipe.preload()
102 return run_pipe

File /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/runtime/runtime_pipeline.py:147, in RuntimePipeline.preload(self)
143 def preload(self):
144 """
145 Preload the operators.
146 """
--> 147 return _Graph(self._dag_repr.nodes, self._dag_repr.edges, self._operator_pool, self._thread_pool, TimeProfiler(False))

File /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/runtime/runtime_pipeline.py:65, in _Graph.__init__(self, nodes, edges, operator_pool, thread_pool, time_profiler, trace_edges)
63 self.features = None
64 self._time_profiler.record(Event.pipe_name, Event.pipe_in)
---> 65 self._initialize()
66 self._input_queue = self._data_queues[0]

File /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/runtime/runtime_pipeline.py:81, in _Graph._initialize(self)
79 node = create_node(self._nodes[name], self._operator_pool, in_queues, out_queues, self._time_profiler)
80 if not node.initialize():
---> 81 raise RuntimeError(node.err_msg)
82 self._node_runners.append(node)

RuntimeError: Node-image-text-embedding/clip-1 runs failed, error msg: Create image-text-embedding/clip-1 operator image-text-embedding/clip:main with args None and kws {'model_name': 'clip_vit_base_patch16', 'modality': 'image'} failed, err: Load operator failed, Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/runtime/nodes/node.py", line 88, in initialize
self._op = self._op_pool.acquire_op(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/runtime/operator_manager/operator_pool.py", line 101, in acquire_op
op = self._op_loader.load_operator(hub_op_id, op_args, op_kws, tag, latest)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/runtime/operator_manager/operator_loader.py", line 155, in load_operator
raise RuntimeError('Load operator failed')
RuntimeError: Load operator failed

TypeError: can't pickle _thread.lock objects

Environment:

  • Milvus version: 2.2.8
  • Deployment mode(standalone or cluster):standalone
  • MQ type(rocksmq, pulsar or kafka): no
  • SDK version(e.g. pymilvus v2.0.0rc2):2.1.0
  • OS(Ubuntu or CentOS): CentOS7.9
  • CPU/Memory: 128GB
  • GPU: 24GB
  • Others:

Current Behavior:
When I run the default example, which is located in "examples/image/reverse_image_search/1_build_image_search_engine.ipynb", it throws an exception:
https://user-images.githubusercontent.com/6244724/240831658-2a5a480c-932d-4c59-b9be-fb22dfdb82b6.png

How can I solve this problem with the default towhee example?

Can Milvus report an error when linking to the wrong ip, or when it fails to connect?

from pymilvus import connections, FieldSchema, CollectionSchema, DataType, Collection, utility

def create_milvus_collection(collection_name, dim):
    connections.connect(host='10.97.4.101', port='19530')

    if utility.has_collection(collection_name):
        utility.drop_collection(collection_name)

    fields = [
    FieldSchema(name='id', dtype=DataType.INT64, description='ids', is_primary=True, auto_id=False),
    FieldSchema(name='embedding', dtype=DataType.FLOAT_VECTOR, description='embedding vectors', dim=dim)
    ]
    schema = CollectionSchema(fields=fields, description='reverse image search')
    collection = Collection(name=collection_name, schema=schema)

    # create IVF_FLAT index for collection.
    index_params = {
        'metric_type':'L2',
        'index_type':"IVF_FLAT",
        'params':{"nlist":2048}
    }
    collection.create_index(field_name="embedding", index_params=index_params)
    return collection
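connections.connect may not surface an unreachable server very loudly, so a pragmatic sketch (hedged; it only relies on the call raising an exception when the handshake fails) is to wrap the connection attempt and re-raise with a clear message:

from pymilvus import connections

def connect_or_fail(host, port='19530', alias='default'):
    # pymilvus raises an exception when the server cannot be reached
    try:
        connections.connect(alias=alias, host=host, port=port)
    except Exception as e:
        raise RuntimeError(f'Cannot connect to Milvus at {host}:{port}: {e}') from e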

Hoping the text-to-image search example will support Chinese

Searching images with Chinese text has been on my idea list for a long time.
The frustrating part is that training a model myself would take a lot of time; I did a lot of research but in the end never spent the time on it.
I was very happy to see that towhee provides a text-to-image search example.
Unfortunately it currently only supports English; I hope searching images with Chinese text can be supported.

ModuleNotFoundError: No module named 'towhee.models.utils.video_transforms'

Hello~
I am trying the example video/reverse_video_search/2_deep_dive_reverse_video_search.ipynb, but the following error occurred.
My towhee version is 1.0.0. Does anyone know how to solve the problem?

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/home/image_check/code/examples/video/reverse_video_search/2_deep_dive_reverse_video_search.ipynb Cell 11 line 8
      3 from towhee.datacollection import DataCollection
      5 collection = create_milvus_collection('timesformer', 768)
      7 insert_pipe = (
----> 8     pipe.input('csv_path')
      9         .flat_map('csv_path', ('id', 'path', 'label'), read_csv)
     10         .map('id', 'id', lambda x: int(x))
     11         .map('path', 'frames', ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample', args={'num_samples': 8}))
     12         .map('frames', ('labels', 'scores', 'features'), ops.action_classification.timesformer(skip_preprocess=True))
     13         .map('features', 'features', ops.towhee.np_normalize())
     14         .map(('id', 'features'), 'insert_res', ops.ann_insert.milvus_client(host='127.0.0.1', port='19530', collection_name='timesformer'))
     15         .output()
     16 )
     18 insert_pipe('reverse_video_search.csv')
     20 collection.load()

File ~/anaconda3/envs/towhee/lib/python3.8/site-packages/towhee/runtime/pipeline.py:116, in Pipeline.output(self, *output_schema, **config_kws)
    113 dag_dict[self._clo_node]['next_nodes'].append(uid)
    115 run_pipe = RuntimePipeline(dag_dict, config=config_kws)
--> 116 run_pipe.preload()
    117 return run_pipe

File ~/anaconda3/envs/towhee/lib/python3.8/site-packages/towhee/runtime/runtime_pipeline.py:140, in RuntimePipeline.preload(self)
...
    from .timesformer import Timesformer
  File "/home/image_check/.towhee/operators/action-classification/timesformer/versions/main/timesformer.py", line 13, in 
    from towhee.models.utils.video_transforms import transform_video
ModuleNotFoundError: No module named 'towhee.models.utils.video_transforms'

module 'towhee' has no attribute 'read_csv'

Hello, I have tried the example "image/reverse_image_search/2_deep_dive_image_search.ipynb", but the error below occurred. My towhee version is 1.1.0. Does anybody know how to solve this problem?

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[4], line 14
     10 collection = create_milvus_collection(model, model_dim[model])
     12 with Timer(f'{model} load'):
     13     ( 
---> 14         towhee.read_csv('reverse_image_search.csv')
     15             .runas_op['id', 'id'](func=lambda x: int(x))
     16             .image_decode['path', 'img']()
     17             .image_embedding.timm['img', 'vec'](model_name=model)
     18             .tensor_normalize['vec', 'vec']()
     19             .to_milvus['id', 'vec'](collection=collection, batch=100)
     20     )
     21 with Timer(f'{model} query'):
     22     ( towhee.glob['path']('./test/*/*.JPEG')
     23             .image_decode['path', 'img']()
     24             .image_embedding.timm['img', 'vec'](model_name=model)
   (...)
     31             .report()
     32     )

AttributeError: module 'towhee' has no attribute 'read_csv'
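towhee.read_csv and the runas_op / bracketed-call style belong to the old DataCollection API, which is no longer available in towhee 1.x. A hedged sketch of the same insert flow in the newer pipe/ops style, reusing the CSV-reading pattern from the reverse-video-search issue above (the column names 'id' and 'path', the model name and the collection name are assumptions):

import csv
from towhee import pipe, ops

def read_csv(csv_path):
    # yields one (id, path) pair per row of the csv
    with open(csv_path, 'r', encoding='utf-8-sig') as f:
        for row in csv.DictReader(f):
            yield row['id'], row['path']

insert_pipe = (
    pipe.input('csv_path')
    .flat_map('csv_path', ('id', 'path'), read_csv)
    .map('id', 'id', lambda x: int(x))
    .map('path', 'img', ops.image_decode())
    .map('img', 'vec', ops.image_embedding.timm(model_name='resnet50'))
    .map('vec', 'vec', ops.towhee.np_normalize())
    .map(('id', 'vec'), 'insert_res', ops.ann_insert.milvus_client(host='127.0.0.1', port='19530', collection_name='resnet50'))
    .output()
)

insert_pipe('reverse_image_search.csv')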

[Text Image Search] AttributeError: 'DataQueue' object has no attribute 'to_list'

Hi there :-)

Using milvus 2.0.2 and pymilvus 2.0.2, with towhee 0.9.0, pandas 2.0.1 and numpy 1.24.3.

I ran into a new problem when running the "Text Image Search" program from examples. While running the step "Release a Showcase", an error occurred:

Traceback (most recent call last):
  File "/home/liu/anaconda3/envs/py39/lib/python3.9/site-packages/gradio/routes.py", line 401, in run_predict
    output = await app.get_blocks().process_api(
  File "/home/liu/anaconda3/envs/py39/lib/python3.9/site-packages/gradio/blocks.py", line 1302, in process_api
    result = await self.call_function(
  File "/home/liu/anaconda3/envs/py39/lib/python3.9/site-packages/gradio/blocks.py", line 1025, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/liu/anaconda3/envs/py39/lib/python3.9/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/liu/anaconda3/envs/py39/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/home/liu/anaconda3/envs/py39/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/tmp/ipykernel_606/2799569186.py", line 14, in search
    image_ids = search_pipeline(text).to_list()[0][0]
AttributeError: 'DataQueue' object has no attribute 'to_list'

And the code "Release a Showcase" is below:

search_pipeline = (
    pipe.input('text')
    .map('text', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch16', modality='text'))
    .map('vec', 'vec', lambda x: x / np.linalg.norm(x))
    .map('vec', 'result', ops.ann_search.milvus_client(host='127.0.0.1', port='19530', collection_name='text_image_search', limit=5))
    .map('result', 'image_ids', lambda x: [item[0] for item in x])
    .output('image_ids')
)

def search(text):
    df = pd.read_csv('reverse_image_search.csv')
    id_img = df.set_index('id')['path'].to_dict()
    imgs = []
    image_ids = search_pipeline(text).to_list()[0][0]
    return [id_img[image_id] for image_id in image_ids]

import gradio

interface = gradio.Interface(search, 
                             gradio.inputs.Textbox(lines=1),
                             [gradio.outputs.Image(type="filepath", label=None) for _ in range(5)]
                            )

interface.launch(inline=True, share=True)

P.S. I once deleted the folder "~/.towhee/operator/ann-insert/milvus-client/" because of #192, and I don't know whether that has any impact on this issue.
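In towhee 0.9+ the pipeline call returns a DataQueue rather than a list-like result, which is why to_list() is missing. A hedged workaround sketch, assuming DataQueue exposes get() to pop one row of output values in the new pipe API:

import pandas as pd

def search(text):
    df = pd.read_csv('reverse_image_search.csv')
    id_img = df.set_index('id')['path'].to_dict()
    # the pipeline returns a DataQueue; get() pops one row of outputs,
    # and 'image_ids' is the only output column here
    image_ids = search_pipeline(text).get()[0]
    return [id_img[image_id] for image_id in image_ids]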

code needs to be modified

import cv2
from towhee.types.image import Image

def read_images(img_paths):
    imgs = []
    for p in img_paths:
        imgs.append(Image(cv2.imread(p), 'BGR'))
    return imgs

p_search_img = (
    p_search_pre.map('pred', 'pred images', read_images)
        .output('img', 'pred images')
)
DataCollection(p_search_img('test/goldfish/*.JPEG')).show()

The column name needs to be updated from 'pred images' to 'pred_images'.
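For reference, a sketch of the corrected snippet with the underscored column name (assuming p_search_pre is the search pipeline defined earlier in the notebook):

import cv2
from towhee.types.image import Image
from towhee import DataCollection

def read_images(img_paths):
    imgs = []
    for p in img_paths:
        imgs.append(Image(cv2.imread(p), 'BGR'))
    return imgs

# schema names may not contain spaces, hence 'pred_images'
p_search_img = (
    p_search_pre.map('pred', 'pred_images', read_images)
        .output('img', 'pred_images')
)
DataCollection(p_search_img('test/goldfish/*.JPEG')).show()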

towhee pipe error


TypeError Traceback (most recent call last)
Cell In[5], line 15
11 yield item
13 # Embedding pipeline
14 p_embed = (
---> 15 pipe.input('src')
16 .flat_map('src', 'img_path', load_image)
17 .map('img_path', 'img', ops.image_decode())
18 .map('img', 'vec', ops.image_embedding.timm(model_name=MODEL, device=DEVICE))
19 )

File ~/Desktop/Projects/vector_database/milvus/venv/lib/python3.8/site-packages/towhee/runtime/pipeline.py:143, in Pipeline.map(self, input_schema, output_schema, fn, config)
141 uid = uuid.uuid4().hex
142 fn_action = self._to_action(fn)
--> 143 dag_dict = deepcopy(self._dag)
144 dag_dict[uid] = {
145 'inputs': input_schema,
146 'outputs': output_schema,
(...)
153 'next_nodes': [],
154 }
155 dag_dict[self._clo_node]['next_nodes'].append(uid)

File /usr/lib/python3.8/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):

File /usr/lib/python3.8/copy.py:230, in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y

File /usr/lib/python3.8/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):

File /usr/lib/python3.8/copy.py:230, in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y

File /usr/lib/python3.8/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):

File /usr/lib/python3.8/copy.py:230, in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y

File /usr/lib/python3.8/copy.py:172, in deepcopy(x, memo, _nil)
170 y = x
171 else:
--> 172 y = _reconstruct(x, memo, *rv)
174 # If is its own copy, don't memoize.
175 if y is not x:

File /usr/lib/python3.8/copy.py:270, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
268 if state is not None:
269 if deep:
--> 270 state = deepcopy(state, memo)
271 if hasattr(y, '__setstate__'):
272 y.__setstate__(state)

File /usr/lib/python3.8/copy.py:146, in deepcopy(x, memo, _nil)
144 copier = _deepcopy_dispatch.get(cls)
145 if copier is not None:
--> 146 y = copier(x, memo)
147 else:
148 if issubclass(cls, type):

File /usr/lib/python3.8/copy.py:230, in _deepcopy_dict(x, memo, deepcopy)
228 memo[id(x)] = y
229 for key, value in x.items():
--> 230 y[deepcopy(key, memo)] = deepcopy(value, memo)
231 return y

File /usr/lib/python3.8/copy.py:161, in deepcopy(x, memo, _nil)
159 reductor = getattr(x, "__reduce_ex__", None)
160 if reductor is not None:
--> 161 rv = reductor(4)
162 else:
163 reductor = getattr(x, "reduce", None)

TypeError: cannot pickle '_thread.lock' object

collection.num_entities doesn't match the number of inserted data

Hello,
I'm trying to follow the tutorial here: https://github.com/towhee-io/examples/blob/main/image/reverse_image_search/1_build_image_search_engine.ipynb
But when I test with 1K images and insert them into Milvus, collection.num_entities reports only 68 inserted. Do you know what's wrong?

The code:

collection = create_milvus_collection('reverse_image_search', 128)

dc = (
    towhee.glob['path'](...)[:1000]
        .image_decode['path', 'img']()
        .image_embedding.timm['img', 'vec']()
        .ann_insert.milvus[('path', 'vec'), 'mr'](...)
)
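One thing worth checking (a hedged suggestion, not a confirmed diagnosis): in some pymilvus versions num_entities only counts rows that have already been persisted, so flushing the collection before reading the count can change the number:

# persist pending inserts before reading the row count
collection.flush()
print(collection.num_entities)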

Torch not compiled with CUDA enabled

Node-image-text-embedding/clip-2 runs failed, error msg: Create image-text-embedding/clip-2 operator image-text-embedding/clip:main with args None and kws {'model_name': 'clip_vit_base_patch16', 'modality': 'image', 'device': 0} failed, err: Torch not compiled with CUDA enabled, Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/towhee-0.9.1.dev111-py3.9.egg/towhee/runtime/nodes/node.py", line 88

in initialize. System: macOS Big Sur, GPU: Intel Iris 1536 MB.
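The operator was created with device=0, which requests CUDA; on a Mac without an NVIDIA GPU a hedged workaround is to keep the CLIP operator on CPU (assuming the operator accepts device='cpu', or simply omit the argument):

from towhee import pipe, ops

p = (
    pipe.input('path')
    .map('path', 'img', ops.image_decode.cv2('rgb'))
    # no CUDA on this machine, so run the model on CPU
    .map('img', 'vec', ops.image_text_embedding.clip(model_name='clip_vit_base_patch16', modality='image', device='cpu'))
    .output('img', 'vec')
)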

milvus-standalone error log found with the 'Text-Video Retrieval' notebook

Hi, I'm new to towhee and wanted to try https://github.com/towhee-io/examples/blob/main/video/text_video_retrieval/1_text_video_retrieval_engine.ipynb, but the following block has been running for hours:

collection = create_milvus_collection('text_video_retrieval', 512)

I found an error in the output of docker logs milvus-standalone | grep ERROR:

[2023/04/13 08:38:11.691 +00:00] [ERROR] [root_coord.go:1406] ["DescribeCollection failed"] [role=rootcoord] ["collection name"=text_video_retrieval] [msgID=440665421381737694] [error="can't find collection: text_video_retrieval"] [stack="github.com/milvus-io/milvus/internal/rootcoord.(*Core).DescribeCollection\n\t/go/src/github.com/milvus-io/milvus/internal/rootcoord/root_coord.go:1406\ngithub.com/milvus-io/milvus/internal/distributed/rootcoord.(*Server).DescribeCollection\n\t/go/src/github.com/milvus-io/milvus/internal/distributed/rootcoord/service.go:354\ngithub.com/milvus-io/milvus/internal/proto/rootcoordpb._RootCoord_DescribeCollection_Handler.func1\n\t/go/src/github.com/milvus-io/milvus/internal/proto/rootcoordpb/root_coord.pb.go:915\ngithub.com/grpc-ecosystem/go-grpc-middleware/tracing/opentracing.UnaryServerInterceptor.func1\n\t/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/tracing/opentracing/server_interceptors.go:38\ngithub.com/milvus-io/milvus/internal/proto/rootcoordpb._RootCoord_DescribeCollection_Handler\n\t/go/src/github.com/milvus-io/milvus/internal/proto/rootcoordpb/root_coord.pb.go:917\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/[email protected]/server.go:1286\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/[email protected]/server.go:1609\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/[email protected]/server.go:934"]

docker ps:

docker ps                                                                                
CONTAINER ID   IMAGE                                      COMMAND                  CREATED         STATUS                PORTS                       NAMES
a6490a7c84fe   milvusdb/milvus:v2.0.2                     "/tini -- milvus run…"   4 days ago      Up 4 days             0.0.0.0:19530->19530/tcp    milvus-standalone
781dfb9c3f61   quay.io/coreos/etcd:v3.5.0                 "etcd -advertise-cli…"   4 days ago      Up 4 days             2379-2380/tcp               milvus-etcd
182fe7d79592   minio/minio:RELEASE.2020-12-03T00-03-10Z   "/usr/bin/docker-ent…"   4 days ago      Up 4 days (healthy)   9000/tcp                    milvus-minio

conda list:

# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main
_openmp_mutex             5.1                       1_gnu
aiofiles                  23.1.0                   pypi_0    pypi
aiohttp                   3.8.4                    pypi_0    pypi
aiosignal                 1.3.1                    pypi_0    pypi
altair                    4.2.2                    pypi_0    pypi
anyio                     3.6.2                    pypi_0    pypi
asttokens                 2.2.1              pyhd8ed1ab_0    conda-forge
async-timeout             4.0.2                    pypi_0    pypi
attrs                     22.2.0                   pypi_0    pypi
av                        10.0.0                   pypi_0    pypi
backcall                  0.2.0              pyh9f0ad1d_0    conda-forge
backports                 1.0                pyhd8ed1ab_3    conda-forge
backports.functools_lru_cache 1.6.4              pyhd8ed1ab_0    conda-forge
bleach                    6.0.0                    pypi_0    pypi
bzip2                     1.0.8                h7b6447c_0
ca-certificates           2022.12.7            ha878542_0    conda-forge
certifi                   2022.12.7          pyhd8ed1ab_0    conda-forge
cffi                      1.15.1                   pypi_0    pypi
charset-normalizer        3.1.0                    pypi_0    pypi
click                     8.1.3                    pypi_0    pypi
contourpy                 1.0.7                    pypi_0    pypi
cryptography              40.0.1                   pypi_0    pypi
cycler                    0.11.0                   pypi_0    pypi
debugpy                   1.5.1           py310h295c915_0
decorator                 5.1.1              pyhd8ed1ab_0    conda-forge
docutils                  0.19                     pypi_0    pypi
entrypoints               0.4                pyhd8ed1ab_0    conda-forge
executing                 1.2.0              pyhd8ed1ab_0    conda-forge
fastapi                   0.95.0                   pypi_0    pypi
ffmpy                     0.3.0                    pypi_0    pypi
filelock                  3.11.0                   pypi_0    pypi
fonttools                 4.39.3                   pypi_0    pypi
frozenlist                1.3.3                    pypi_0    pypi
fsspec                    2023.4.0                 pypi_0    pypi
gradio                    3.24.1                   pypi_0    pypi
gradio-client             0.0.8                    pypi_0    pypi
grpcio                    1.47.5                   pypi_0    pypi
grpcio-tools              1.47.5                   pypi_0    pypi
h11                       0.14.0                   pypi_0    pypi
httpcore                  0.16.3                   pypi_0    pypi
httpx                     0.23.3                   pypi_0    pypi
huggingface-hub           0.13.4                   pypi_0    pypi
idna                      3.4                      pypi_0    pypi
importlib-metadata        6.2.0                    pypi_0    pypi
ipykernel                 6.15.0             pyh210e3f2_0    conda-forge
ipython                   8.12.0             pyh41d4057_0    conda-forge
jaraco-classes            3.2.3                    pypi_0    pypi
jedi                      0.18.2             pyhd8ed1ab_0    conda-forge
jeepney                   0.8.0                    pypi_0    pypi
jinja2                    3.1.2                    pypi_0    pypi
jsonschema                4.17.3                   pypi_0    pypi
jupyter_client            7.3.4              pyhd8ed1ab_0    conda-forge
jupyter_core              5.3.0           py310hff52083_0    conda-forge
keyring                   23.13.1                  pypi_0    pypi
kiwisolver                1.4.4                    pypi_0    pypi
ld_impl_linux-64          2.38                 h1181459_1
libffi                    3.3                  he6710b0_2
libgcc-ng                 11.2.0               h1234567_1
libgomp                   11.2.0               h1234567_1
libsodium                 1.0.18               h36c2ea0_1    conda-forge
libstdcxx-ng              11.2.0               h1234567_1
libuuid                   1.41.5               h5eee18b_0
linkify-it-py             2.0.0                    pypi_0    pypi
markdown-it-py            2.2.0                    pypi_0    pypi
markupsafe                2.1.2                    pypi_0    pypi
matplotlib                3.7.1                    pypi_0    pypi
matplotlib-inline         0.1.6              pyhd8ed1ab_0    conda-forge
mdit-py-plugins           0.3.3                    pypi_0    pypi
mdurl                     0.1.2                    pypi_0    pypi
mmh3                      3.0.0                    pypi_0    pypi
more-itertools            9.1.0                    pypi_0    pypi
multidict                 6.0.4                    pypi_0    pypi
ncurses                   6.4                  h6a678d5_0
nest-asyncio              1.5.6              pyhd8ed1ab_0    conda-forge
numpy                     1.24.2                   pypi_0    pypi
openssl                   1.1.1t               h7f8727e_0
orjson                    3.8.9                    pypi_0    pypi
packaging                 23.0               pyhd8ed1ab_0    conda-forge
pandas                    2.0.0                    pypi_0    pypi
parso                     0.8.3              pyhd8ed1ab_0    conda-forge
pexpect                   4.8.0              pyh1a96a4e_2    conda-forge
pgzip                     0.3.4                    pypi_0    pypi
pickleshare               0.7.5                   py_1003    conda-forge
pillow                    9.5.0                    pypi_0    pypi
pip                       23.0.1          py310h06a4308_0
pkginfo                   1.9.6                    pypi_0    pypi
platformdirs              3.2.0              pyhd8ed1ab_0    conda-forge
prompt-toolkit            3.0.38             pyha770c72_0    conda-forge
prompt_toolkit            3.0.38               hd8ed1ab_0    conda-forge
protobuf                  3.20.3                   pypi_0    pypi
psutil                    5.9.0           py310h5eee18b_0
ptyprocess                0.7.0              pyhd3deb0d_0    conda-forge
pure_eval                 0.2.2              pyhd8ed1ab_0    conda-forge
pyarrow                   11.0.0                   pypi_0    pypi
pycparser                 2.21                     pypi_0    pypi
pydantic                  1.10.7                   pypi_0    pypi
pydub                     0.25.1                   pypi_0    pypi
pygit2                    1.10.1                   pypi_0    pypi
pygments                  2.14.0             pyhd8ed1ab_0    conda-forge
pymilvus                  2.2.4                    pypi_0    pypi
pyparsing                 3.0.9                    pypi_0    pypi
pyrsistent                0.19.3                   pypi_0    pypi
python                    3.10.0               h12debd9_5
python-dateutil           2.8.2              pyhd8ed1ab_0    conda-forge
python-multipart          0.0.6                    pypi_0    pypi
python_abi                3.10                    2_cp310    conda-forge
pytz                      2023.3                   pypi_0    pypi
pyyaml                    6.0                      pypi_0    pypi
pyzmq                     23.2.0          py310h6a678d5_0
readline                  8.2                  h5eee18b_0
readme-renderer           37.3                     pypi_0    pypi
requests                  2.28.2                   pypi_0    pypi
requests-toolbelt         0.10.1                   pypi_0    pypi
rfc3986                   1.5.0                    pypi_0    pypi
rich                      13.3.3                   pypi_0    pypi
secretstorage             3.3.3                    pypi_0    pypi
semantic-version          2.10.0                   pypi_0    pypi
setuptools                65.6.3          py310h06a4308_0
six                       1.16.0             pyh6c4a22f_0    conda-forge
sniffio                   1.3.0                    pypi_0    pypi
sqlite                    3.41.1               h5eee18b_0
stack_data                0.6.2              pyhd8ed1ab_0    conda-forge
starlette                 0.26.1                   pypi_0    pypi
tabulate                  0.9.0                    pypi_0    pypi
tk                        8.6.12               h1ccaba5_0
toolz                     0.12.0                   pypi_0    pypi
tornado                   6.1             py310h5764c6d_3    conda-forge
towhee                    0.9.0                    pypi_0    pypi
towhee-models             0.9.0                    pypi_0    pypi
tqdm                      4.65.0                   pypi_0    pypi
traitlets                 5.9.0              pyhd8ed1ab_0    conda-forge
twine                     4.0.2                    pypi_0    pypi
typing-extensions         4.5.0                hd8ed1ab_0    conda-forge
typing_extensions         4.5.0              pyha770c72_0    conda-forge
tzdata                    2023.3                   pypi_0    pypi
uc-micro-py               1.0.1                    pypi_0    pypi
ujson                     5.4.0                    pypi_0    pypi
urllib3                   1.26.15                  pypi_0    pypi
uvicorn                   0.21.1                   pypi_0    pypi
wcwidth                   0.2.6              pyhd8ed1ab_0    conda-forge
webencodings              0.5.1                    pypi_0    pypi
websockets                11.0.1                   pypi_0    pypi
wheel                     0.38.4          py310h06a4308_0
xz                        5.2.10               h5eee18b_1
yarl                      1.8.2                    pypi_0    pypi
zeromq                    4.3.4                h9c3ff4c_1    conda-forge
zipp                      3.15.0                   pypi_0    pypi
zlib                      1.2.13               h5eee18b_0
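
Two things may be worth checking. The docker ps output above shows a Milvus v2.0.2 server while the environment lists pymilvus 2.2.4; a client/server version mismatch like that is a common source of calls that hang, so aligning the two versions is worth trying first. Also, the "can't find collection" ERROR in the log is usually just the describe/has_collection probe on a collection that does not exist yet, not the root cause. A minimal connectivity sketch (host and port assumed from the default docker-compose setup):

from pymilvus import connections, utility

connections.connect(host='127.0.0.1', port='19530')
print(utility.list_collections())                       # should return quickly if the server is reachable
print(utility.has_collection('text_video_retrieval'))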

TypeError: cannot pickle '_thread.lock' object

I am running the search_article_in_medium.ipynb notebook.
When I execute the following code:

from towhee import ops, pipe, DataCollection

insert_pipe = (pipe.input('df')
    .flat_map('df', 'data', lambda df: df.values.tolist())
    .map('data', 'res', ops.ann_insert.milvus_client(host='127.0.0.1',
                                                     port='19530',
                                                     collection_name='search_article_in_medium'))
    .output('res')
)

The error message is:
Traceback (most recent call last):
File "D:\IT\python\Python38\lib\site-packages\IPython\core\interactiveshell.py", line 3398, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 3, in <cell line: 3>
insert_pipe = (pipe.input('df')
File "D:\IT\python\Python38\lib\site-packages\towhee\runtime\pipeline.py", line 103, in output
dag_dict = deepcopy(self._dag)
File "D:\IT\python\Python38\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "D:\IT\python\Python38\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "D:\IT\python\Python38\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "D:\IT\python\Python38\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "D:\IT\python\Python38\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "D:\IT\python\Python38\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "D:\IT\python\Python38\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "D:\IT\python\Python38\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "D:\IT\python\Python38\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "D:\IT\python\Python38\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "D:\IT\python\Python38\lib\copy.py", line 161, in deepcopy
rv = reductor(4)
TypeError: cannot pickle '_thread.lock' object

What is the cause of this, and how can I fix it?
My towhee version is 0.9.0.
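
A small diagnostic sketch, assuming the deepcopy of the pipeline DAG fails because the Milvus client inside the operator holds a thread lock: deep-copy the operator on its own, and if that raises the same TypeError, the client is the unpicklable object.

import copy

from towhee import ops

op = ops.ann_insert.milvus_client(host='127.0.0.1',
                                  port='19530',
                                  collection_name='search_article_in_medium')
copy.deepcopy(op)   # raises the same TypeError if the client cannot be deep-copied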

Read image Error failed

2023-08-03 14:01:57,823 - 139852063110912 - image_decode_cv2.py-image_decode_cv2:40 - ERROR: Download image from http://904d20239a385ec9.jpg failed, error msg: {"Code":"NoSuchURL","Msg":"The specified URL does not exist."}, request code: 404
2023-08-03 14:01:57,823 - 139852063110912 - image_decode_cv2.py-image_decode_cv2:68 - ERROR: Read image http:///904d20239a385ec9.jpg failed
When I use a pipe to extract image features and one of the URLs fails to download, the pipeline hangs and never outputs anything.
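
A sketch of one workaround, assuming the hang comes from URLs that never return a valid image: pre-check each link with a cheap HEAD request and only feed reachable URLs into the pipeline (the urls list below is a placeholder for your own data).

import requests

urls = ['http://example.com/a.jpg']   # placeholder list of image links

def url_ok(url, timeout=5):
    try:
        return requests.head(url, timeout=timeout, allow_redirects=True).status_code == 200
    except requests.RequestException:
        return False

valid_urls = [u for u in urls if url_ok(u)]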

[Bug] Build a Video Classification System in 5 Lines raises the error "IndexError: list index out of range"

import towhee

(
    towhee.glob['path']('./train/tap_dancing/*.mp4').unstream()
          .video_decode.ffmpeg['path', 'frames'](sample_type='uniform_temporal_subsample', args={'num_samples': 16})
          .action_classification['frames', ('predicts', 'scores', 'features')].pytorchvideo(
              model_name='x3d_m', skip_preprocess=True, topk=5)
          .select['path', 'predicts', 'scores']()
          .show()
)

截屏2022-06-16 11 39 57

towhee version

towhee-0.7.0-py3-none-any.whl
towhee.models-0.7.0-py3-none-any.whl

The error above was still raised even though the relevant local caches had been cleared:

rm -rf ~/.cache/torch/hub/checkpoints/
rm -rf ~/.towhee/
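
One way to narrow this down, sketched with the same DataCollection API as the report above (the path pattern is the one from the report): count the frames coming out of the decoder before running the classifier, since a clip that yields fewer frames than expected can make downstream indexing fail.

import towhee

(
    towhee.glob['path']('./train/tap_dancing/*.mp4').unstream()
          .video_decode.ffmpeg['path', 'frames'](sample_type='uniform_temporal_subsample', args={'num_samples': 16})
          .runas_op['frames', 'n_frames'](func=lambda fs: len(list(fs)))
          .select['path', 'n_frames']()
          .show()
)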

24-hour audio using embedding model nnfp

Hi, I have a 24-hour audio file, and when I process it with NNFP I get an out-of-memory error. How can I fix this problem? Here is the code:

from towhee import ops, pipe

emb_pipe = (
    pipe.input('url')
    .map('url', 'frames', ops.audio_decode.ffmpeg())
    .map('frames', 'emb', ops.audio_embedding.nnfp())
)
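
A sketch of one workaround, assuming the out-of-memory error comes from decoding and embedding the whole 24-hour recording at once: split the audio into shorter chunks with ffmpeg first, then run the embedding pipeline per chunk (file names are placeholders, and an output stage is added so results can be collected).

import glob
import subprocess

from towhee import ops, pipe

# 1) split the long recording into 10-minute chunks (placeholder file names)
subprocess.run([
    'ffmpeg', '-i', 'long_audio.wav',
    '-f', 'segment', '-segment_time', '600', '-c', 'copy',
    'chunk_%04d.wav',
], check=True)

# 2) the same pipeline as above, plus an output stage
emb_pipe = (
    pipe.input('url')
    .map('url', 'frames', ops.audio_decode.ffmpeg())
    .map('frames', 'emb', ops.audio_embedding.nnfp())
    .output('emb')
)

# 3) embed each chunk separately instead of the whole file at once
embeddings = [emb_pipe(path) for path in sorted(glob.glob('chunk_*.wav'))]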
