facegood-audio2face's People

Contributors

jelowang, ptannor, sznero, wyxogo

facegood-audio2face's Issues

Difference between dataSet1 and dataSetx?

Hi, what is the difference between dataSet1 and dataSetx?

Does it mean different people? Could we combine all the data for training to get a person-independent model?

Thanks!

Cannot find the open-source ExportBsWeights.py

Hello, and thanks for open-sourcing the driving code.
We have already purchased FACEGOOD's hardware and want to export blendshape weights from Maya based on your open-source code, but we could not find ExportBsWeights.py in the repository. Could you provide it?

import lib.socket.ue4_socket as ue4

Traceback (most recent call last):
File "f:/Audio2Face-main/code/test/AiSpeech/zsmeif.py", line 51, in
import lib.socket.ue4_socket as ue4
ModuleNotFoundError: No module named 'lib.socket'

thanks!
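
For reference, this usually means the script was started from a directory where the lib package is not importable (or that lib/socket is missing from the checkout). A minimal, hedged sketch of one common workaround, assuming zsmeif.py sits next to the lib folder in code/test/AiSpeech, is:

# Hedged workaround sketch: make the directory containing the "lib" package
# importable regardless of the current working directory.
import os
import sys

AISPEECH_DIR = os.path.dirname(os.path.abspath(__file__))  # .../code/test/AiSpeech
if AISPEECH_DIR not in sys.path:
    sys.path.insert(0, AISPEECH_DIR)

import lib.socket.ue4_socket as ue4  # resolves only if lib/socket exists in the checkout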

Training loss does not converge

Thank you for sharing this project. I recorded about 2 hours of data and used facecap to extract the corresponding audio and blendshapes, but the results after training are very poor. I verified that the data is aligned, yet at test time the animation is very poor: the character only occasionally twitches its lips. How can I fix this?

Small issues when testing with a song (vocals already extracted)

I trained on dataset16 and tested with a song. At 1:25 the singing should stop, but the predicted jawopen does not close; it still has a fairly large value for about another second (i.e. the mouth closes one second late). What could be the reason?

How do I drive a face after obtaining the blendshape weights?

Hello, thanks for open-sourcing!
As the title says: after the model outputs blendshape weights, how do I generate a video from them? Is there any reference material? I am a beginner.
Also:

  1. The step2_mb.py file is empty.
  2. Besides cartoon characters, can real humans be supported (i.e. training with recorded video)?
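
On the question of generating output from the weights, there is no official answer here, but as a rough, hedged illustration of how predicted weights can drive a character in Maya (the blendShape node name and target names are hypothetical and must match your own rig):

# Hedged sketch only: apply one frame of predicted blendshape weights in Maya.
# "blendShape1" and target_names are hypothetical and depend on your rig.
import maya.cmds as cmds

target_names = ["jawOpen", "mouth_stretch_c"]  # order must match the model output
frame_weights = [0.35, 0.0]                    # one frame of predicted weights

for name, value in zip(target_names, frame_weights):
    cmds.setAttr("blendShape1.{}".format(name), value)
# Keyframe each frame (cmds.setKeyframe) and render the scene to get a video.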

Combining this with cloud rendering

Hello, I have taken a preliminary look at your project and think there is considerable room for cooperation with our product.

We focus on cloud rendering: running 3D applications such as UE4 and Unity3D in the cloud and accessing them through thin clients such as a browser.

Our cloud rendering product already integrates voice input and intelligent speech interaction (Speech). Combined with our cloud rendering, you could focus on the algorithms and 3D rendering.

For high-fidelity digital human scenarios, cloud rendering removes the dependence on end-device compute power.

Our integration demo: click here.

I plan to run a preliminary test first; if you are interested in deeper cooperation, please contact me.

About driving the UE avatar

Can UE and Python run on two different machines and communicate via WebSocket? I looked at the UE plugin part, but it does not seem to include source code, so I cannot modify it. (Is the UE plugin open source, or is there an open-source reference?)
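
For reference, the general idea works even without the plugin source: the Python side only needs to push weights to whatever machine UE runs on. A hedged sketch (not the project's actual protocol; host, port and payload format are made up) looks like this:

# Hedged sketch, not the repo's real protocol: send one frame of weights to a
# UE machine elsewhere on the network as a JSON payload over a TCP socket.
import json
import socket

UE_HOST = "192.168.1.50"   # hypothetical address of the UE machine
UE_PORT = 5000             # hypothetical port the receiving plugin listens on

weights = [0.0] * 116      # one frame of predicted blendshape weights

with socket.create_connection((UE_HOST, UE_PORT), timeout=1.0) as conn:
    conn.sendall(json.dumps({"weights": weights}).encode("utf-8") + b"\n")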

Cannot find the project

I have not used GitHub much before. Is the /example/ueExample path inside this project, or is it somewhere else?

Problem in downloading the Data from baidu cloud

Hi! I am a student from India researching audio2face and came across your project, which is really incredible! However, I am unable to open the Baidu cloud link from India to download the training and testing data because of restrictions on the website here. Is there any way I can access the data? It would be of great help! Thanks.

Error when testing the model: Error Main loop: HTTPSConnectionPool(host='api.talkinggenie.com', port=443): Max retries exceeded with url: /api/v2/public/authToken (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))

2022-08-08 16:43:18.557937: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
WARNING:tensorflow:From D:\anaconda3\envs\audio2face_lqy\lib\site-packages\tensorflow_core\python\compat\v2_compat.py:68: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
2022-08-08 16:43:20.328710: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2022-08-08 16:43:20.347819: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
2022-08-08 16:43:20.356155: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2022-08-08 16:43:20.364649: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2022-08-08 16:43:20.373982: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll
2022-08-08 16:43:20.377125: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll
2022-08-08 16:43:20.390252: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll
2022-08-08 16:43:20.398360: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll
2022-08-08 16:43:20.407310: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2022-08-08 16:43:20.407496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2022-08-08 16:43:20.407963: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2022-08-08 16:43:20.409759: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
2022-08-08 16:43:20.409940: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2022-08-08 16:43:20.410032: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2022-08-08 16:43:20.410148: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll
2022-08-08 16:43:20.410237: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll
2022-08-08 16:43:20.410325: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll
2022-08-08 16:43:20.410412: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll
2022-08-08 16:43:20.410500: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2022-08-08 16:43:20.410601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2022-08-08 16:43:20.998933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-08-08 16:43:20.999124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2022-08-08 16:43:20.999265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2022-08-08 16:43:20.999574: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9640 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From D:\vedio2face\FACEGOOD-Audio2Face-main\FACEGOOD-Audio2Face-main\code\test\AiSpeech\lib\tensorflow\input_lpc_output_weight.py:20: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
the cpus number is: 0
the cpus number is: 1
run main

Error Main loop: HTTPSConnectionPool(host='api.talkinggenie.com', port=443): Max retries exceeded with url: /api/v2/public/authToken (Caused by ProxyError('Cannot connect to proxy.',
OSError(0, 'Error')))
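
This failure comes from the HTTP client picking up a system proxy it cannot reach. A hedged sketch of one common workaround, ignoring environment proxy settings for the token request (the endpoint is taken from the log above; the payload is a placeholder), is:

# Hedged sketch: bypass any system/environment proxy when requesting the token.
import requests

session = requests.Session()
session.trust_env = False                         # ignore HTTP(S)_PROXY variables
session.proxies = {"http": None, "https": None}

resp = session.post("https://api.talkinggenie.com/api/v2/public/authToken",
                    json={})                      # placeholder payload; use real credentials
print(resp.status_code)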

How can ARKit blendshape weights be exported?

The flowchart shows ARKit blendshape weights being generated, but the training data contains 116 blendshape weights.
Can ARKit weights be exported?
Or how can the 116 blendshape weights be converted into the ARKit format?
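
The repo does not document a converter here, but as a hedged sketch of the general idea: if the 116-to-ARKit relationship (for example, the one described by Voice2Face_blendshape2ARkit.xlsx) is expressed as a matrix, converting one frame of weights is a single matrix product. The file name and array shapes below are assumptions:

# Hedged sketch: map 116 rig blendshape weights to 52 ARKit weights with a
# precomputed mapping matrix. Paths and shapes are assumptions, not the repo's API.
import numpy as np

mapping = np.load("blendshape_to_arkit.npy")   # hypothetical file, shape (116, 52)
frame_weights = np.zeros(116)                  # one frame of predicted weights

arkit_weights = np.clip(frame_weights @ mapping, 0.0, 1.0)   # shape (52,)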

Invalid input device (no default output device)

Hello, I installed pyaudio with Python 3.6, but running in the terminal gives No module named 'pyaudio'.
Running from the Anaconda prompt instead gives Invalid input device (no default output device). Why is that?
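
That PyAudio error usually means no default input/output device is visible to the process. A small sketch for listing the devices PyAudio actually sees, so that a valid index can be passed explicitly to open(), is:

# Enumerate audio devices so a valid device index can be chosen explicitly
# instead of relying on the (missing) default device.
import pyaudio

pa = pyaudio.PyAudio()
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    print(i, info["name"],
          "in:", info["maxInputChannels"],
          "out:", info["maxOutputChannels"])
pa.terminate()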

A question about the mapping in Voice2Face_blendshape2ARkit.xlsx

Looking at the ARKit-to-Maya mapping, jawOpen, for example, maps to the Maya expression 'mouth_stretch_c'. But in the training data this dimension of the label is always 0, so the trained model outputs 0 for jawOpen at all times. Is this a problem with how I am using it, or is the mapping wrong?
[screenshots attached]
Thanks!
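
One quick way to verify the claim about the label (a hedged sketch; substitute whatever label .npy file your training step produces) is to look for dimensions that never vary:

# Hedged sketch: find label dimensions that are identically zero across all frames.
import numpy as np

labels = np.load("train_label_var.npy")   # hypothetical label file, shape (N, 116)
always_zero = np.where(np.all(labels == 0, axis=0))[0]
print("dimensions that are always zero:", always_zero)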

About training: the loss does not converge well

I noticed that when the dataset is split, the validation set also takes part in training, and in that case the loss converges. But when I split the data 1:4, the validation loss barely converges. Is there any guidance on training here?
[screenshot attached]
@SZNero

invalid ELF header

(tensor1.15) research@research:~/disk1/Audio2Face-main/code/test/AiSpeech$ python zsmeif.py
Traceback (most recent call last):
File "zsmeif.py", line 79, in <module>
from lib.tensorflow.input_wavdata_output_lpc import c_lpc, get_audio_frames
File "/home/research/disk1/Audio2Face-main/code/test/AiSpeech/lib/tensorflow/input_wavdata_output_lpc.py", line 20, in <module>
dll = cdll.LoadLibrary(dll_path_linux)
File "/home/research/disk1/anaconda3/envs/tensor1.15/lib/python3.6/ctypes/__init__.py", line 426, in LoadLibrary
return self._dlltype(name)
File "/home/research/disk1/anaconda3/envs/tensor1.15/lib/python3.6/ctypes/__init__.py", line 348, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /home/research/disk1/Audio2Face-main/code/test/AiSpeech/lib/tensorflow/LPC.dll: invalid ELF header
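
The error itself is expected on Linux: LPC.dll is a Windows binary and cannot be loaded by dlopen. As a hedged sketch of the usual pattern (libLPC.so is a hypothetical Linux build of the same code and is not shipped in the repo):

# Hedged sketch: pick the platform-appropriate LPC library. "libLPC.so" would
# have to be compiled from the LPC sources for Linux; it is not in the repo.
import os
import platform
from ctypes import cdll

lib_dir = os.path.dirname(os.path.abspath(__file__))
if platform.system() == "Windows":
    lpc = cdll.LoadLibrary(os.path.join(lib_dir, "LPC.dll"))
else:
    lpc = cdll.LoadLibrary(os.path.join(lib_dir, "libLPC.so"))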

Test run fails

I placed the example folder under the AiSpeech folder and ran example/ueExample/FaceGoodLiveLink.exe. I made sure the microphone is connected properly, but the virtual character does not react at all and its lips do not move.
I get the following error:
2022-01-22 17:53:37.267016: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2022-01-22 17:53:37.751174: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
2022-01-22 17:53:37.751374: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
2022-01-22 17:53:37.752340: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
2022-01-22 17:53:37.752457: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_ALLOC_FAILED
2022-01-22 17:53:37.755998: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2022-01-22 17:53:39.230418: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2022-01-22 17:53:39.230887: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2022-01-22 17:53:39.231089: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2022-01-22 17:53:39.231184: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2022-01-22 17:53:39.231703: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2022-01-22 17:53:39.231795: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2022-01-22 17:53:39.231886: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2022-01-22 17:53:39.232217: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
Exception in thread Thread-3:
Traceback (most recent call last):
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
target_list, run_metadata)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d/Conv2D}}]]
[[dense_1/BiasAdd/_11]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d/Conv2D}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\Anconda\envs\work\lib\threading.py", line 926, in _bootstrap_inner
self.run()
File "D:\Anconda\envs\work\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "D:/Github_Code/Audio2Face/code/test/AiSpeech/zsmeif.py", line 93, in worker
output_data = get_weight(output_lpc)
File "D:\Github_Code\Audio2Face\code\test\AiSpeech\lib\tensorflow\input_lpc_output_weight.py", line 40, in get_weight
self.input_keep_prob_tensor: 1.0
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d/Conv2D (defined at \Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
[[dense_1/BiasAdd/_11]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d/Conv2D (defined at \Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'conv2d/Conv2D':
File "/Github_Code/Audio2Face/code/test/AiSpeech/zsmeif.py", line 83, in
pb_weights_animation = WeightsAnimation(pbfile_path)
File "\Github_Code\Audio2Face\code\test\AiSpeech\lib\tensorflow\input_lpc_output_weight.py", line 24, in init
tf.import_graph_def(self.graph_def, name='')
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\importer.py", line 405, in import_graph_def
producer_op_list=producer_op_list)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\importer.py", line 517, in _import_graph_def_internal
_ProcessNewOps(graph)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\importer.py", line 243, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3561, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3561, in
for c_op in c_api_util.new_tf_operations(self)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3451, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()

Exception in thread Thread-2:
Traceback (most recent call last):
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
target_list, run_metadata)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d/Conv2D}}]]
[[dense_1/BiasAdd/_11]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node conv2d/Conv2D}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\Anconda\envs\work\lib\threading.py", line 926, in _bootstrap_inner
self.run()
File "D:\Anconda\envs\work\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "D:/Github_Code/Audio2Face/code/test/AiSpeech/zsmeif.py", line 93, in worker
output_data = get_weight(output_lpc)
File "D:\Github_Code\Audio2Face\code\test\AiSpeech\lib\tensorflow\input_lpc_output_weight.py", line 40, in get_weight
self.input_keep_prob_tensor: 1.0
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "D:\Anconda\envs\work\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d/Conv2D (defined at \Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
[[dense_1/BiasAdd/_11]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d/Conv2D (defined at \Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'conv2d/Conv2D':
File "/Github_Code/Audio2Face/code/test/AiSpeech/zsmeif.py", line 83, in
pb_weights_animation = WeightsAnimation(pbfile_path)
File "\Github_Code\Audio2Face\code\test\AiSpeech\lib\tensorflow\input_lpc_output_weight.py", line 24, in init
tf.import_graph_def(self.graph_def, name='')
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\importer.py", line 405, in import_graph_def
producer_op_list=producer_op_list)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\importer.py", line 517, in _import_graph_def_internal
_ProcessNewOps(graph)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\importer.py", line 243, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3561, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3561, in
for c_op in c_api_util.new_tf_operations(self)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3451, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "\Anconda\envs\work\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()
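
CUDNN_STATUS_ALLOC_FAILED at session start-up is usually a GPU memory allocation problem (for example, the UE executable already holding most of the GPU). A hedged sketch of the standard TF 1.x first thing to try, letting TensorFlow grow GPU memory on demand instead of pre-allocating it, is below; whether zsmeif.py exposes a config hook for this is an assumption, the snippet only shows the pattern:

# Hedged sketch: create the TF 1.x session with on-demand GPU memory growth,
# the usual first mitigation for CUDNN_STATUS_ALLOC_FAILED at startup.
import tensorflow as tf  # TF 1.x

config = tf.ConfigProto()
config.gpu_options.allow_growth = True

sess = tf.Session(config=config)   # use this session wherever the graph is run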

Test fails

Error:
wait recording
ERROR: Handshake status 429 Too Many Requests
ERROR: Could not create connection: ws://api.tgenie.cn/runtime/v3/recognize?res=comm&productId=914008290&token=bb75195d-f974-4940-94e2-e322a91969f3
Could you please take a look?

Calling LPC.dll

Could you provide a macOS version? I have basically never built anything on Windows. Thanks 🙏

The text in r_asr_websocket is empty on every test, so the face cannot be driven

On every test, the text field printed from r_asr_websocket is empty, so the face cannot be driven. r_asr_websocket: {'result': {'asr': {'noiseStatus': False, 'text': ''}, 'context': {'recordId': '6b159eca-10be-4afa-b73d-1fab8fbf3721'}, 'interrupt': False, 'sdk': {}}, 'status': 200}

193: flag: True
Recording: 0
Recording: 1
Recording: 2
Recording: 3
wait get asr:
line 213 info_print: True
asr time: 0.05204272270202637
line 218 r_asr_websocket: {'result': {'asr': {'noiseStatus': False, 'text': ''}, 'context': {'recordId': '6b159eca-10be-4afa-b73d-1fab8fbf3721'}, 'interrupt': False, 'sdk': {}}, 'status': 200}
line 221 text_asr:
265 end: wait recording

About output shape.

Thank you for opening this great repo. The output of the zsmeif.pb model has 37 dimensions, but the labels in the val_label_var.npy file have 38 dimensions.
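
A quick, hedged way to confirm both numbers locally (file names follow the issue; inspecting the last few graph ops is only a heuristic for finding the output tensor):

# Hedged sketch: compare the label dimensionality with the frozen graph's outputs.
import numpy as np
import tensorflow as tf  # TF 1.x

labels = np.load("val_label_var.npy")
print("label shape:", labels.shape)

graph_def = tf.GraphDef()
with tf.gfile.GFile("zsmeif.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
for op in graph.get_operations()[-5:]:      # look at the last few ops and their output shapes
    print(op.name, [t.shape for t in op.outputs])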

The blendshape weights I get are all very small, mostly in the 1e-5 to 1e-3 range. What could be the reason?

Below is one complete set of blendshape weights I obtained:

0.0,-0.0,0.0,-0.0,0.0,-0.0,0.0,0.0,0.0,6.478356226580217e-05,0.0,0.0,7.74457112129312e-06,1.4016297427588142e-05,0.0003456445410847664,0.0,0.0,-1.0420791113574523e-05,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,-0.0,-0.0,1.4069833014218602e-05,0.0,0.0,0.0,0.0,-8.011257159523666e-05,0.0,1.212013557960745e-05,1.2120142855565064e-05,0.0020416416227817535,0.0020416416227817535,0.002010183408856392,0.002010183408856392,0.0,0.0,0.0,0.0,0.0,1.2120121027692221e-05,1.2120119208702818e-05,0.0008284172508865595,0.0008284170180559158,0.0,0.0,5.53658464923501e-05,5.536514800041914e-05,0.0029786918312311172,0.0029780641198158264,0.0,0.0,0.0028428025543689728,0.0007716789841651917,0.0,9.665172547101974e-05,9.665219113230705e-05,0.0012831706553697586,0.001283368095755577,0.0,0.0,0.0,0.0,0.006156831979751587,0.0003454512916505337,0.000345451757311821,0.0009102877229452133,0.0009102877229452133,0.0006938898004591465,0.00055687315762043,1.0965315595967695e-05,1.096533833333524e-05,0.0,-8.066563168540597e-07,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,-0.0,0.0,-0.0,0.0,-0.0,0.0,-0.0,0.0,0.0,-0.0,0.0,-0.0,0.0,-0.0,0.0,-0.0,0.0,-0.0,0.0,0.0

Additional info: the audio used is code\test\AiSpeech\res\xxx_00004.wav; I also tried other audio, with the same result.

Questions about the Audio2Face pipeline

I have not yet run the whole pipeline, so I have a few questions for now.
(1) Is the whole pipeline, as shown in that figure (ASR, TTS, FACEGOOD Audio2Face), essentially a voice-dialogue interaction system?
(2) The blendshape coefficients produced at the end are predicted from the speech generated by the TTS dialogue module, and have nothing to do with the voice originally recorded by the microphone, right?
(3) If I want to drive it with my own voice, do I need to retrain the model with my own voice data?

Question about the implementation of motion loss

split_y = tf.split(y, 2, 0)    # arguments: tensor, number of splits, axis
split_y_ = tf.split(y_, 2, 0)  # arguments: tensor, number of splits, axis
y0 = split_y[0]
y1 = split_y[1]
y_0 = split_y_[0]
y_1 = split_y_[1]
loss_M = 2 * tf.reduce_mean(tf.square(y0 - y1 - y_0 + y_1))

Currently, the motion loss is not calculated on adjacent frames: tf.split() just splits the tensor into two contiguous halves.

y0 = y[::2, ...]
y1 = y[1::2, ...]
y_0 = y_[::2, ...]
y_1 = y_[1::2, ...]

Slicing with a step of 2 like this pairs up adjacent frames.
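
A small, hedged sketch on dummy data (batch and feature sizes are made up) makes the difference between the two pairings concrete:

# Hedged sketch on dummy data: tf.split pairs frame i with frame i + B/2,
# while step-2 slicing pairs frame 2i with frame 2i+1 (adjacent frames).
import numpy as np
import tensorflow as tf  # TF 1.x, matching the training code

y = tf.constant(np.arange(8, dtype=np.float32).reshape(8, 1))   # 8 dummy "frames"

half_a, half_b = tf.split(y, 2, 0)          # frames 0-3 paired with frames 4-7
even, odd = y[::2, ...], y[1::2, ...]       # frames 0,2,4,6 paired with 1,3,5,7

with tf.Session() as sess:
    print(sess.run(half_a - half_b))        # differences across half the batch
    print(sess.run(even - odd))             # differences between adjacent frames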

GPU out of memory when training the model & could you upload a trained model.ckpt?

Problem summary

When I run the command
python step14_train.py --epochs 8 --dataSet dataSet1
the program terminates with an error; the console reports (the full error message is at the end):
(0) Internal: Blas GEMM launch failed : a.shape=(32, 272), b.shape=(272, 150), m=32, n=150, k=272
[[node dense/MatMul (defined at C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]

Initial troubleshooting

Searching online for this error message mostly turns up discussions of insufficient GPU memory or the GPU being occupied by other processes.
After checking, only this program is running on the GPU. Following online advice, I added allow_growth=True to tf.GPUOptions and also tried lowering per_process_gpu_memory_fraction, but neither helped (tested separately and combined).

System configuration

OS: Windows 10
GPU: NVIDIA GeForce RTX 3080 Ti (pci bus id: 0000:01:00.0, compute capability: 8.6, 9830 MB reported by TensorFlow)
Python version: 3.7.11
CUDA version: cuda_10.0.130_411.31_win10
cuDNN version: cudnn-10.0-windows10-x64-v7.6.5.32

GPU memory usage while running

GPU memory usage while the program runs:
After cublas64_100.dll is loaded, GPU memory jumps straight from 437 MB to 10358 MB (out of 12288 MB), i.e. 84.3% usage, which should already exceed the per_process_gpu_memory_fraction=0.8 limit.
After cudnn64_7.dll is loaded, GPU memory reaches 10424 MB (out of 12288 MB), peaking at 10645 MB.
Then the program crashes.

Full error message

2022-03-04 09:34:16.145502: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
WARNING:tensorflow:From step14_train.py:30: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.

WARNING:tensorflow:From step14_train.py:37: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.

(2200, 32, 64, 1)
(2200, 90000)
(1000, 32, 64, 1)
(1000, 90000)
WARNING:tensorflow:From step14_train.py:86: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

WARNING:tensorflow:From step14_train.py:86: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

WARNING:tensorflow:From step14_train.py:90: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From D:\Ningxin\Coding\Voice2Face-main\code\train\model_paper.py:21: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.Conv2D instead.
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\layers\convolutional.py:424: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use layer.__call__ method instead.
WARNING:tensorflow:From D:\Ningxin\Coding\Voice2Face-main\code\train\model_paper.py:49: flatten (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.flatten instead.
WARNING:tensorflow:From D:\Ningxin\Coding\Voice2Face-main\code\train\model_paper.py:51: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead.
WARNING:tensorflow:From D:\Ningxin\Coding\Voice2Face-main\code\train\model_paper.py:52: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use rate instead of keep_prob. Rate should be set to rate = 1 - keep_prob.
WARNING:tensorflow:From step14_train.py:98: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.

WARNING:tensorflow:From step14_train.py:103: The name tf.train.exponential_decay is deprecated. Please use tf.compat.v1.train.exponential_decay instead.

WARNING:tensorflow:From step14_train.py:105: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.

WARNING:tensorflow:From step14_train.py:105: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

WARNING:tensorflow:From step14_train.py:106: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.

WARNING:tensorflow:From step14_train.py:108: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead.

WARNING:tensorflow:From step14_train.py:111: The name tf.GPUOptions is deprecated. Please use tf.compat.v1.GPUOptions instead.

WARNING:tensorflow:From step14_train.py:122: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

WARNING:tensorflow:From step14_train.py:122: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

2022-03-04 09:34:25.346984: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2022-03-04 09:34:25.351173: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2022-03-04 09:34:25.389703: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: NVIDIA GeForce RTX 3080 Ti major: 8 minor: 6 memoryClockRate(GHz): 1.71
pciBusID: 0000:01:00.0
2022-03-04 09:34:25.389860: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2022-03-04 09:34:25.469703: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2022-03-04 09:34:25.566662: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll
2022-03-04 09:34:25.589486: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll
2022-03-04 09:34:25.662159: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll
2022-03-04 09:34:25.713455: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll
2022-03-04 09:34:25.799900: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2022-03-04 09:34:25.800359: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2022-03-04 09:37:34.818241: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-03-04 09:37:34.818380: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2022-03-04 09:37:34.818619: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2022-03-04 09:37:34.819527: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9830 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 3080 Ti, pci bus id: 0000:01:00.0, compute capability: 8.6)
WARNING:tensorflow:From step14_train.py:126: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.

WARNING:tensorflow:From step14_train.py:127: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

2022-03-04 09:37:35.857089: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2022-03-04 09:38:32.908660: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2022-03-04 09:49:39.479241: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows
Relying on driver to perform ptx compilation. This message will be only logged once.
2022-03-04 09:49:39.610678: E tensorflow/stream_executor/cuda/cuda_blas.cc:428] failed to run cuBLAS routine: CUBLAS_STATUS_EXECUTION_FAILED
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\client\session.py", line 1365, in _do_call
return fn(*args)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\client\session.py", line 1350, in _run_fn
target_list, run_metadata)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\client\session.py", line 1443, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(32, 272), b.shape=(272, 150), m=32, n=150, k=272
[[{{node dense/MatMul}}]]
[[Adam/update/_38]]
(1) Internal: Blas GEMM launch failed : a.shape=(32, 272), b.shape=(272, 150), m=32, n=150, k=272
[[{{node dense/MatMul}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "step14_train.py", line 190, in
train()
File "step14_train.py", line 136, in train
train_op = sess.run(train_step, feed_dict={data: train_data, label: train_label, keep_pro: 0.5})
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
(0) Internal: Blas GEMM launch failed : a.shape=(32, 272), b.shape=(272, 150), m=32, n=150, k=272
[[node dense/MatMul (defined at C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
[[Adam/update/_38]]
(1) Internal: Blas GEMM launch failed : a.shape=(32, 272), b.shape=(272, 150), m=32, n=150, k=272
[[node dense/MatMul (defined at C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\framework\ops.py:1748) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'dense/MatMul':
File "step14_train.py", line 190, in
train()
File "step14_train.py", line 95, in train
output, emotion_input = net(data,output_size,keep_pro)
File "D:\Ningxin\Coding\Voice2Face-main\code\train\model_paper.py", line 51, in net
fc1 = tf.layers.dense(inputs=flat, units=150 , activation=None) # activation=None means a linear activation
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\layers\core.py", line 187, in dense
return layer.apply(inputs)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 1700, in apply
return self.call(inputs, *args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\layers\base.py", line 548, in call
outputs = super(Layer, self).call(inputs, *args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 854, in call
outputs = call_fn(cast_inputs, *args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 234, in wrapper
return converted_call(f, options, args, kwargs)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 439, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\autograph\impl\api.py", line 330, in _call_unconverted
return f(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\keras\layers\core.py", line 1050, in call
outputs = gen_math_ops.mat_mul(inputs, self.kernel)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 6136, in mat_mul
name=name)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_helper
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\py37_tensorflow\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in init
self._traceback = tf_stack.extract_stack()

About changing the speaker's voice

The pre-trained model you provide seems to be based on the zsmeif voice. We now want to switch to a male voice from AISpeech (思必驰). How should we prepare the training data? As I understand it, the avatary tool can generate data from video of a real person, but since the new voice is synthesized by AISpeech, how do we match it against each frame of the video? Or have I misunderstood some part of the pipeline? Thanks!

How to create animation from video in maya?

I can record my voice and video, but how can I get the corresponding blendshapes in Maya? Did you use a face-capture tool? If so, can you introduce it here? If not, how did you create the blendshapes?
