facebookresearch / audio2photoreal
Code and dataset for photorealistic Codec Avatars driven from audio
License: Other
Hi! Thank you very much for the amazing work!
Could you explain in more detail about the data acquisition process? For example, the number of cameras required for the capture domes, how cameras were placed, etc.
My other question is how to process the raw audio to get .npy files like those in your dataset. Also, does the data-processing step require only the frontal view of the video, or does it require multiview?
Thank you!
Hi, when I was trying to train the model (train.train_diffusion.py) with multiple GPUs (tested on V100s and 2080Tis), I ran into the error below:
DDP RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.
My training command is:
python -m train.train_diffusion --save_dir ./test_log/1 --data_root ./dataset/GQS883/ --batch_size 2 --dataset social --data_format face --layer 8 --heads 8 --timestep_respacing "" --max_seq_length 600
Do you have any idea? Many thanks!
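For reference, this error usually means torch.distributed was never initialized before DDP wrapped the model. A minimal sketch of what that initialization typically looks like, assuming a torchrun launch (this is not the repo's official multi-GPU entry point):

```python
# Hedged sketch: initialize the default process group before any DDP wrapping.
# Assumes a launch like `torchrun --nproc_per_node=<num_gpus> -m train.train_diffusion ...`,
# which sets RANK, WORLD_SIZE, and LOCAL_RANK in the environment.
import os

import torch
import torch.distributed as dist

def setup_distributed() -> int:
    """Init the default process group and return this process's local rank."""
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")  # NCCL backend for CUDA GPUs
    return local_rank
```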
Hello,
Firstly, I want to extend my sincere thanks for the great work on this repository.
I have a question regarding the functionality: Is the audio-to-face feature designed to work in real-time?
Is there any guide on how to install and test on Windows?
Hello,
Thanks for this amazing contribution. It seems that the models and prerequisites are currently not available for download at http://audio2photoreal_models.berkeleyvision.org/, so the installation scripts are currently not functional.
Thanks
Amazing work! Has there been any attempt to drive a MetaHuman? It would be even more vivid if the output could be used directly with MetaHuman.
Thanks for your excellent work!
The paper says you adopt a classifier-free guidance policy to train the diffusion module, as shown in the following picture.
However, in your code I find that the cond_mode parameter is set when the FiLMTransformer model is initialized and doesn't change in the TrainLoop. Moreover, the forward function of FiLMTransformer only uses the cond_mode of the model instance and doesn't use the condition signal in y.
So I wonder whether classifier-free guidance is actually used in the training process? Looking forward to your reply!
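For context, classifier-free guidance is usually trained by randomly dropping the conditioning signal so a single network learns both conditional and unconditional predictions. An illustrative sketch of that dropout (generic, not this repo's implementation):

```python
# Generic classifier-free-guidance training trick: randomly zero the condition
# for a fraction of the batch so the model also learns an unconditional mode.
import torch

def drop_condition(cond: torch.Tensor, drop_prob: float = 0.1) -> torch.Tensor:
    """cond: (B, ...) conditioning tensor; returns it with ~drop_prob of the
    batch entries replaced by zeros (the 'null' condition)."""
    keep = (torch.rand(cond.shape[0], device=cond.device) > drop_prob).float()
    return cond * keep.view(-1, *([1] * (cond.dim() - 1)))
```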
Hi,
Thank you for releasing this code. Do you have any plans to release it under an open-source license (e.g., Apache, MIT, or ISC)?
Thank you!
I would like to create a scenario with 3 avatars talking with each other. The avatars would all be voice driven, but I'd like the avatars that are not speaking to be looking at the one that is speaking. Ideally there would be a way to specify a head azimuth and elevation offset for each avatar, which can be programmatically controlled. Is this possible?
Thank you for your awesome work! It's really cool.
While implementing your project, I have a few questions.
I want to visualize 2 different avatars in the same scene, just like on the introduction page.
However, when I ran the code following the README, only 1 avatar was displayed, from 2 different angles.
Could you share the code that enables visualizing 2 different avatars in the same scene?
I want to display them just like the first photo, but I cannot figure out how.
First, I would like to thank you for the incredible project and for making it public.
My issue is that after recording the audio and hitting the submit button, I run into an "Exception in ASGI application" error. Below is the full log:
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\fastapi\applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\applications.py", line 116, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
raise exc
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\middleware\cors.py", line 83, in __call__
await self.app(scope, receive, send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\middleware\exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\_exception_handler.py", line 55, in wrapped_app
raise exc
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\_exception_handler.py", line 44, in wrapped_app
await app(scope, receive, sender)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\routing.py", line 746, in __call__
await route.handle(scope, receive, send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\routing.py", line 288, in handle
await self.app(scope, receive, send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\routing.py", line 75, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\_exception_handler.py", line 55, in wrapped_app
raise exc
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\_exception_handler.py", line 44, in wrapped_app
await app(scope, receive, sender)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\routing.py", line 73, in app
await response(scope, receive, send)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\responses.py", line 340, in __call__
await send(
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\_exception_handler.py", line 41, in sender
await send(message)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\_exception_handler.py", line 41, in sender
await send(message)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\starlette\middleware\errors.py", line 161, in _send
await send(message)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 512, in send
output = self.conn.send(event=h11.EndOfMessage())
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\h11\_connection.py", line 512, in send
data_list = self.send_with_data_passthrough(event)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\h11\_connection.py", line 545, in send_with_data_passthrough
writer(event, data_list.append)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\h11\_writers.py", line 67, in __call__
self.send_eom(event.headers, write)
File "C:\Users\musta\anaconda3\envs\a2p_env\lib\site-packages\h11\_writers.py", line 96, in send_eom
raise LocalProtocolError("Too little data for declared Content-Length")
h11._util.LocalProtocolError: Too little data for declared Content-Length
Device is Windows 11
Awesome project! Where can I set the camera parameters for rendering a novel view?
What specifications of GPU and CPU are needed?
Can you show me proper instructions? I really can't make an avatar properly, and there are no proper instructions in the abstract or on GitHub. Could you help me with the instructions in the video?
Hi, thanks for sharing the excellent work. I am trying to run this repo, but fairseq cannot be imported correctly because of my low GLIBC version, and I cannot update my environment. Hence, I wonder whether there are other methods that avoid the use of fairseq. Thanks in advance.
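One possible route, offered as an assumption to verify rather than a confirmed drop-in replacement: HuggingFace transformers ships wav2vec 2.0 without a fairseq dependency, so its features could stand in for the fairseq-based extractor if they match what this repo's models expect:

```python
# Hedged sketch: extract wav2vec 2.0 features via transformers instead of
# fairseq. Whether these features are compatible with the checkpoints in this
# repo is an open assumption.
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
model.eval()

wav = torch.randn(1, 16000)  # placeholder: 1 second of 16 kHz mono audio
with torch.no_grad():
    feats = model(wav).last_hidden_state  # (1, T', 768) frame-level features
```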
Is there a way to run the demo on a laptop without a GPU? When I execute this on my macOS Big Sur 11.7.10 machine, an error is raised:
python -m demo.demo
running on... cpu
File ".... anaconda3/envs/audio2photoreal/lib/python3.9/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
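The demo appears to assume a CUDA device. A common pattern for a CPU fallback, offered as a sketch rather than a supported configuration of this repo, is to pick the device dynamically and map checkpoints onto it:

```python
# Hedged sketch: select CPU when CUDA is unavailable and load checkpoints there.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
ckpt_path = "checkpoints/some_model.pth"  # hypothetical path for illustration
state = torch.load(ckpt_path, map_location=device)
```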
Hi, I'm relatively new to ML and want to try training an audio-to-co-speech-gesture model that is more suitable for game engines like Unreal Engine 5.
My question: if I input a different type of data to the body VQ-VAE model, where instead of joint angles I input joint positions in 3D space,
and then feed the resulting vector-quantized codes to the body transformer, to essentially train a similar model that outputs joint positions instead of joint angles,
how practical is this, and would it work?
If so, what considerations would I have to make?
Any help would be greatly appreciated, thanks!
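On feasibility: vector quantization itself is agnostic to what the feature dimension encodes, so swapping joint angles for flattened 3D joint positions mainly changes the input width and the normalization statistics (positions also need a consistent root frame). A generic sketch of the quantization step, not this repo's implementation:

```python
# Generic VQ nearest-neighbour lookup: the codebook never "knows" whether the
# D features are joint angles or flattened (num_joints * 3) positions.
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """z: (T, D) per-frame features; codebook: (K, D) learned codes."""
    dists = torch.cdist(z, codebook)  # (T, K) Euclidean distances
    idx = dists.argmin(dim=1)         # nearest code index per frame
    return codebook[idx]              # (T, D) quantized features
```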
Hello, a question: I have tried many times to make the conversational avatar and it has been very hard for me. Could you send me a video on how to make the conversational avatar in audio2photoreal?
soundfile.LibsndfileError: Error opening '/tmp/audio_tmp_sample0.wav': System error.
This happens when I supply my audio on Windows.
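A likely cause, offered as an assumption: the demo writes to hard-coded /tmp/... paths, which do not exist on Windows. A cross-platform sketch would build the temp path portably instead:

```python
# Hedged sketch: derive the temp directory portably instead of hard-coding /tmp.
import os
import tempfile

audio_path = os.path.join(tempfile.gettempdir(), "audio_tmp_sample0.wav")
```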
https://www.geeksforgeeks.org/how-to-downgrade-python-version-in-colab/
tl;dr
step1: !sudo apt-get install python3.9
step2: !sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1
reap the benefits: !python --version
Thanks for sharing this project!
Does the 256-dimensional expression vector have any specific meaning, similar to the iPhone ARKit 52 blendshapes (https://arkit-face-blendshapes.com/)?
Hello, a question: how do I run the local URL in Google Colab? It's not working. Can you help me with how to use it and how it works?
Hi, just pondering how the code could be adapted to introduce a background colour, for something like chroma-keying the result (greenscreen).
Thanks!
This is when running the demo
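One way the chroma-key idea could work, assuming the renderer can expose a per-pixel foreground/alpha mask (an assumption; the helper below is hypothetical, not the repo's API): composite the rendered frame over a solid green background before writing the video.

```python
# Hypothetical post-processing sketch for greenscreen output; assumes a
# per-pixel foreground mask is available from the renderer.
import numpy as np

def composite_on_green(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """frame: (H, W, 3) uint8 render; mask: (H, W) float in [0, 1], 1 = avatar."""
    green = np.zeros_like(frame)
    green[..., 1] = 255  # pure green background
    alpha = mask[..., None]
    return (frame * alpha + green * (1.0 - alpha)).astype(np.uint8)
```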
Hi,
Thank you for sharing your work, for my research I am interested in the video data too. The paper mentions the video will be released too, but I couldn't find it in the dataset yet. Could you point me to where I could find it?
Best regards,
Thomas
(a2p_env) C:\Users\kit\audio2photoreal>python -m train.train_diffusion --save_dir checkpoints/diffusion/c1_face_test --data_root ./dataset/RLW104/ --batch_size 4 --dataset social --data_format face --layers 8 --heads 8 --timestep_respacing '' --max_seq_length 600
using 0 gpus
Traceback (most recent call last):
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\kit\audio2photoreal\train\train_diffusion.py", line 83, in
main(rank=0, world_size=1)
File "C:\Users\kit\audio2photoreal\train\train_diffusion.py", line 36, in main
raise FileExistsError("save_dir [{}] already exists.".format(args.save_dir))
FileExistsError: save_dir [checkpoints/diffusion/c1_face_test] already exists.
(a2p_env) C:\Users\kit\audio2photoreal>python -m train.train_diffusion --save_dir checkpoints/diffusion/c1_face_test --data_root ./dataset/RLW104/ --batch_size 4 --dataset social --data_format face --layers 8 --heads 8 --timestep_respacing '' --max_seq_length 600
using 0 gpus
creating data loader...
[dataset.py] training face only model
['[dataset.py] sequences of 600']
C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\numpy\core\fromnumeric.py:43: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
result = getattr(asarray(obj), method)(*args, **kwds)
C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\numpy\core\fromnumeric.py:43: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
result = getattr(asarray(obj), method)(*args, **kwds)
[dataset.py] loading from... ./dataset/RLW104/data_stats.pth
[dataset.py] train | 18 sequences ((8989, 256)) | total len 160523
creating logger...
creating model and diffusion...
Traceback (most recent call last):
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\kit\audio2photoreal\train\train_diffusion.py", line 83, in
main(rank=0, world_size=1)
File "C:\Users\kit\audio2photoreal\train\train_diffusion.py", line 54, in main
model, diffusion = create_model_and_diffusion(args, split_type="train")
File "C:\Users\kit\audio2photoreal\utils\model_util.py", line 42, in create_model_and_diffusion
model = FiLMTransformer(**get_model_args(args, split_type=split_type)).to(
File "C:\Users\kit\audio2photoreal\model\diffusion.py", line 157, in init
self.setup_lip_models()
File "C:\Users\kit\audio2photoreal\model\diffusion.py", line 276, in setup_lip_models
cp = torch.load(cp_path, map_location=torch.device(self.device))
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 809, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 1172, in _load
result = unpickler.load()
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
wrap_storage=restore_location(storage, location),
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 1086, in restore_location
return default_restore_location(storage, str(map_location))
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 217, in default_restore_location
result = fn(storage, location)
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 182, in _cuda_deserialize
device = validate_cuda_device(location)
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torch\serialization.py", line 166, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
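For reference, the last frame points at the torch.load call in model/diffusion.py's setup_lip_models, and the usual CPU-only workaround is exactly what the error message suggests (a sketch, untested against this repo):

```python
# Hedged sketch of the CPU-only fix suggested by the error message: map the
# lip-model checkpoint to CPU when deserializing.
import torch

cp_path = "checkpoints/lip_model.pth"  # hypothetical path; the repo supplies its own
cp = torch.load(cp_path, map_location=torch.device("cpu"))
```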
Hello!
Thank you very much for this open-source work.
I would like to know which model you used to extract the 104 joint angles for the body skeleton.
Also, is there a way to view the skeletons only?
Thank you,
This looks like an FFmpeg issue, but I have the latest FFmpeg, so I'm not sure what's going on.
Traceback (most recent call last):
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/mediapy/init.py", line 1749, in write_video
writer.add_image(image)
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/mediapy/init.py", line 1653, in add_image
if stdin.write(data) != len(data):
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/gradio/queueing.py", line 489, in call_prediction
output = await route_utils.call_process_api(
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/gradio/route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/gradio/blocks.py", line 1561, in process_api
result = await self.call_function(
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/gradio/blocks.py", line 1179, in call_function
prediction = await anyio.to_thread.run_sync(
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
return await future
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/gradio/utils.py", line 678, in wrapper
response = f(*args, **kwargs)
File "/root/audio2photoreal/demo/demo.py", line 226, in audio_to_avatar
gradio_model.body_renderer.render_full_video(
File "/root/audio2photoreal/visualize/render_codes.py", line 153, in render_full_video
self._write_video_stream(
File "/root/audio2photoreal/visualize/render_codes.py", line 95, in _write_video_stream
mediapy.write_video(save_name, out, fps=30)
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/mediapy/init.py", line 1749, in write_video
writer.add_image(image)
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/mediapy/init.py", line 1614, in exit
self.close()
File "/root/anaconda3/envs/a2p_env/lib/python3.9/site-packages/mediapy/init.py", line 1671, in close
raise RuntimeError(f"Error writing '{self.path}': {s}")
RuntimeError: Error writing '/tmp/pred_tmp_sample0.mp4': Unrecognized option 'qp'.
Error splitting the argument list: Option not found
Hi, what nice work with such a wonderful result, and thanks for open-sourcing it.
However, I ran into this problem while trying to read and learn the code. In the lips regressor module, an encoder-decoder structure built on the pre-trained wav2vec2 is designed. Some things confuse me:
causal is set to False by default (audio2photoreal/model/diffusion.py, line 274 in 3a94699). Since it was proposed in FaceFormer, a causal attention mask has been adopted in many following works to add inductive bias, so it confuses me why you chose not to use it, even though the causal parameter is implemented.
As a result, the visualizations of the 338-vertex sequence do not look good. Here are some examples (30 fps) I saved when running python -m demo.demo, where the save-to-numpy command is inserted after audio2photoreal/model/diffusion.py, line 309 in 3a94699.
I also tried setting causal = True, and the result is shown below.
I also checked the input audios recorded by my microphone (about 5-7 s); all of them are spoken in English.
Please help me out if you have any ideas, thanks in advance.
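For readers unfamiliar with the term: a causal attention mask restricts each frame to attend only to past frames. An illustrative sketch in the style popularized by FaceFormer (not this repo's exact code):

```python
# Illustrative causal (look-back only) attention mask; True marks positions
# that attention may NOT see, matching PyTorch's boolean attn_mask convention.
import torch

def causal_mask(T: int) -> torch.Tensor:
    """Boolean (T, T) mask blocking attention to future frames."""
    return torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)

# Usage: torch.nn.MultiheadAttention(...)(q, k, v, attn_mask=causal_mask(T))
```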
I installed it as per the README, but running python demo/demo.py reports an error.
Here are the library versions:
diffusion 6.10.1
diffusion-core 0.0.40
The requirements.txt does not give exact version numbers, so I can't check the versions. Could someone provide a requirements.txt with version numbers?
Thanks.
What GPUs and how many of them are used for training/inference?
What is the total training and inference time?
Thanks
Please provide a version that maps to SMPL or Unity Mecanim biped rig animations. Thank you.
For now, there are only 4 person IDs to use.
I want to know how I can build and design a person by myself. Thanks if anyone gives me a response.
When I wanted to reproduce this paper, I found that the data was unavailable, both at https://github.com/facebookresearch/audio2photoreal/releases/download/v1.0/.zip and via sh scripts/download_allmodels.sh.
Hello,
First of all thank you for your work !
I am trying to run the code locally (the demo, and also the training) and I wanted to view the dataset to better understand how the model works, but in the dataset folder there is only the data_stats.pth file. Where can I find the audio files, the body poses, etc., as mentioned in the README?
Thank you for your time.
A.B.H
Would it be possible to release a model that can compute expression codes from faces? This would be very beneficial for training on our own data.
What's more, people speaking different languages have different pose habits when speaking.
Though you have closed the same issue #53, it seems unsolved.
It seems that the models and prerequisites are currently not available for download at http://audio2photoreal_models.berkeleyvision.org/.
Hello, how can I change the position of the camera or the model in the scene? The demo shows the same model twice from two different perspectives; is that done by duplicating the model, or by having two cameras?
Amazing work! Could you provide the code for evaluating the model?
(a2p_env) C:\Users\kit\audio2photoreal>python -m train.train_vq --out_dir checkpoints/vq/c1_vq_test --data_root ./dataset/PXB184/ --lr 1e-3 --code_dim 1024 --output_emb_width 64 --depth 4 --dataname social --loss_vel 0.0 --data_format pose --batch_size 4 --add_frame_cond 1 --max_seq_length 600
C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torchaudio\backend\utils.py:74: UserWarning: No audio backend is available.
warnings.warn("No audio backend is available.")
2024-01-10 09:11:43,820 INFO {
"add_frame_cond": 1.0,
"batch_size": 4,
"code_dim": 1024,
"commit": 0.02,
"data_format": "pose",
"data_root": "./dataset/PXB184/",
"dataname": "social",
"dataset": "social",
"depth": 4,
"eval_iter": 1000,
"gamma": 0.05,
"loss_vel": 0.0,
"lr": 0.001,
"lr_scheduler": [
300000
],
"max_seq_length": 600,
"out_dir": "checkpoints/vq/c1_vq_test",
"output_emb_width": 64,
"print_iter": 200,
"resume_pth": null,
"seed": 123,
"total_iter": 300000,
"warm_up_iter": 1000,
"weight_decay": 0.0
}
Traceback (most recent call last):
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\kit\audio2photoreal\train\train_vq.py", line 374, in
main(args)
File "C:\Users\kit\audio2photoreal\train\train_vq.py", line 321, in main
train_loader_iter, val_loader, skip_step = _load_data_info(args, logger)
File "C:\Users\kit\audio2photoreal\train\train_vq.py", line 275, in _load_data_info
data_dict = load_local_data(args.data_root, audio_per_frame=1600)
File "C:\Users\kit\audio2photoreal\data_loaders\get_data.py", line 125, in load_local_data
return _load_pose_data(
File "C:\Users\kit\audio2photoreal\data_loaders\get_data.py", line 79, in _load_pose_data
curr_audio, _ = torchaudio.load(
File "C:\Users\kit\miniconda3\envs\a2p_env\lib\site-packages\torchaudio\backend\no_backend.py", line 16, in load
raise RuntimeError("No audio I/O backend is available.")
RuntimeError: No audio I/O backend is available.
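A commonly suggested fix on Windows (an assumption, not verified against this repo) is to install the soundfile backend with `pip install soundfile` so torchaudio has an I/O backend to select:

```python
# Hedged sketch: check and select torchaudio's soundfile backend (pre-2.0
# torchaudio API, matching the backend/no_backend.py frames in the traceback).
import torchaudio

print(torchaudio.list_audio_backends())    # should include "soundfile"
torchaudio.set_audio_backend("soundfile")  # explicit selection on older versions
```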
ffmpeg -y -i /tmp/pred_tmp_sample0.mp4 -i /tmp/audio_tmp_sample0.wav -c:v copy -map 0:v:0 -map 1:a:0 -c:a aac -b:a 192k -pix_fmt yuva420p /tmp/sample0_pred.mp4 ----------------------------------------------------------------------------------------------------
2024/01/06 04:26:54.453048 cmd_run.go:1055: WARNING: cannot start document portal: dial unix /run/user/0/bus: connect: no such file or directory
ffmpeg version n4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-cuda --enable-cuda-sdk --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libnpp --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
libavutil 56. 51.100 / 56. 51.100
libavcodec 58. 91.100 / 58. 91.100
libavformat 58. 45.100 / 58. 45.100
libavdevice 58. 10.100 / 58. 10.100
libavfilter 7. 85.100 / 7. 85.100
libswscale 5. 7.100 / 5. 7.100
libswresample 3. 7.100 / 3. 7.100
libpostproc 55. 7.100 / 55. 7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/tmp/pred_tmp_sample0.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.45.100
Duration: 00:00:04.00, start: 0.000000, bitrate: 1361 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 2668x2048, 1357 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
Metadata:
handler_name : VideoHandler
/tmp/audio_tmp_sample0.wav: No such file or directory
Thank you for your excellent work. I noticed that the evaluation code provided only covers poses and does not include evaluation for lip reconstructions. Could you please provide the evaluation code for lip reconstructions as well?
Firstly, thank you for the code and sample models! Really helps push the research in this field to new heights.
Based on https://arxiv.org/pdf/1808.00362.pdf or https://arxiv.org/pdf/2105.10441.pdf, seems like the avatar renderer can take a view vector/condition to change the view of the rendered avatar. Is there a way to parameterize this so that we can correct the head position? I'm assuming this is now hard coded somewhere for this sample set to render a fixed view angle.
Also, can we render multiple avatars into the same video? If yes, which object/parameter controls the placement and camera location?
Traceback (most recent call last):
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/gradio/queueing.py", line 489, in call_prediction
output = await route_utils.call_process_api(
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/gradio/route_utils.py", line 232, in call_process_api
output = await app.get_blocks().process_api(
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/gradio/blocks.py", line 1561, in process_api
result = await self.call_function(
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/gradio/blocks.py", line 1179, in call_function
prediction = await anyio.to_thread.run_sync(
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
return await future
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/gradio/utils.py", line 678, in wrapper
response = f(*args, **kwargs)
File "/home/jupyter/audio2photoreal/demo/demo.py", line 220, in audio_to_avatar
face_results, pose_results, audio = generate_results(audio, num_repetitions, top_p)
File "/home/jupyter/audio2photoreal/demo/demo.py", line 188, in generate_results
gradio_model.generate_sequences(
File "/home/jupyter/audio2photoreal/demo/demo.py", line 148, in generate_sequences
sample = self._run_single_diffusion(
File "/home/jupyter/audio2photoreal/demo/demo.py", line 100, in _run_single_diffusion
sample = sample_fn(
File "/home/jupyter/audio2photoreal/diffusion/gaussian_diffusion.py", line 845, in ddim_sample_loop
for sample in self.ddim_sample_loop_progressive(
File "/home/jupyter/audio2photoreal/diffusion/gaussian_diffusion.py", line 925, in ddim_sample_loop_progressive
out = sample_fn(
File "/home/jupyter/audio2photoreal/diffusion/gaussian_diffusion.py", line 683, in ddim_sample
out_orig = self.p_mean_variance(
File "/home/jupyter/audio2photoreal/diffusion/respace.py", line 105, in p_mean_variance
return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
File "/home/jupyter/audio2photoreal/diffusion/gaussian_diffusion.py", line 287, in p_mean_variance
model_output = model(x, self._scale_timesteps(t), **model_kwargs)
File "/home/jupyter/audio2photoreal/diffusion/respace.py", line 145, in call
return self.model(x, new_ts, **kwargs)
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jupyter/audio2photoreal/model/cfg_sampler.py", line 35, in forward
out = self.model(x, timesteps, y)
File "/opt/conda/envs/a2p_env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jupyter/audio2photoreal/model/diffusion.py", line 388, in forward
cond_tokens = torch.where(
RuntimeError: The size of tensor a (7998) must match the size of tensor b (1998) at non-singleton dimension 1
No such file or directory: 'assets/render_defaults_PXB184.pth'
How do I train a new model from scratch?
How do I generate the dataset required for training a new model?
Please explain how the corresponding wav and npy files in the dataset directory are generated.