flameTimewarpML's People

Contributors

andrewpatskanw, daveoy, jfpanisset, talosh

flameTimewarpML's Issues

failed because of an error of type 'Internal C++ object (PySide6.QtGui.QScreen) already deleted.'

0.4.5dev003 on RL8.7 Flame 2025.0.1

Jun 17 16:26:54 : Create Fluidmorph Transition execute callback [<function get_media_panel_custom_ui_actions.<locals>.fluidmorph at 0x7fb828a06340>((<flame.PyClip object at 0x7fb82882c0b0>, <flame.PyClip object at 0x7fb82882c200>),)] failed because of an error of type 'Internal C++ object (PySide6.QtGui.QScreen) already deleted.'
Jun 17 16:27:39 : Fine-tune model on selected clips execute callback [<function get_media_panel_custom_ui_actions.<locals>.finetune at 0x7fb8288c3240>((<flame.PyClip object at 0x7fb82882c0b0>,),)] failed because of an error of type 'Internal C++ object (PySide6.QtGui.QScreen) already deleted.'
Jun 17 16:28:38 : Timewarp from Flame's TW effect execute callback [<function get_media_panel_custom_ui_actions.<locals>.timewarp at 0x7fb8288c2a20>((<flame.PySequence object at 0x7fa164f5c7b0>,),)] failed because of an error of type 'Internal C++ object (PySide6.QtGui.QScreen) already deleted.'

Error: CUDA error: no kernel image is available for execution

First attempt running v0.5.0 dev005: no picture preview, and renders come out without a picture either (error below).

flameTimewarpML: Error: CUDA error: no kernel image is available for execution on the device. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1

Setup: Flame 2024 on Rocky 8.5, A6000 GPU (driver 525.89.02, CUDA 12.0)

Have updated PyTorch with no joy.

Shell log attached
Uploading flame2024_flame07_shell.log…

Support for RTX 30 series

The bundle works really well on my old 2080, but when I switch to my 3080 I run into CUDA errors.

Is it possible to compile this to work with the newer cards?

Thanks!
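
A small diagnostic sketch (assuming a PyTorch build recent enough to provide torch.cuda.get_arch_list()): an RTX 3080 reports compute capability (8, 6), so the installed wheel needs sm_86 in its architecture list; if it is missing, the "no kernel image is available for execution" error is expected and a wheel built for the newer architecture is needed.

import torch

# Compare the GPU's compute capability with the architectures this PyTorch
# wheel was compiled for.
print(torch.__version__, torch.version.cuda)
print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))
print(torch.cuda.get_arch_list())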

0.5.0 dev005 UI - how to?

Installed the latest release on an M1 Mac and it opens and acts like it's rendering. However, the clips just say "Pending Render"; when played back they look like a flicker between an alpha and single frames of the clip, and nothing shows up in the set export folders. Attempting to downgrade to 0.5.0 dev004 is unsuccessful: Flame 2024 just doesn't recognize the plugin as being present at all.

Also, can't figure out how to get the new UI to do anything other than speed changes. Are there instructions or a video tutorial available on how to use it?
(Two screenshots attached: Screen Shot 2024-04-19 at 8:31:20 AM and Screen Shot 2024-04-19 at 8:36:20 AM.)

Manual installation without miniconda

Hi! Is there a way to get rid of the miniconda setup? The Python dependencies seem to be defined in the requirements.txt file, so it would theoretically be possible to manage the environment differently. You are using FLAMETWML_MINICONDA to activate the conda environment from within the code, but could it be replaced by a subprocess call (or a thread) instead? For example:

import os
import subprocess
import threading

class Worker(threading.Thread):

    def __init__(self, bundle_path, lockfile_path):
        super().__init__()
        self.bundle_path = bundle_path
        self.lockfile_path = lockfile_path

    def run(self):
        # launch the wrapper script outside of Flame's embedded interpreter
        command_wrapper = os.path.join(self.bundle_path, 'command_wrapper.py')
        subprocess.Popen(
            ['python3', command_wrapper, self.lockfile_path],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )

# framework.bundle_path and lockfile_path come from the surrounding plugin code
worker = Worker(framework.bundle_path, lockfile_path)
worker.start()
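
One note on the suggestion above: subprocess.Popen returns immediately, so it does not strictly need the thread wrapper; a blocking subprocess.call or subprocess.run, on the other hand, should stay inside a worker thread so the Flame UI remains responsive while the wrapper script runs.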

Installed but no processing.

Hi again.

I seem to have everything centrally installed properly now, but no processing seems to be happening. It exports the shots, then nothing.

[flameTimewarpML] creating folders: /var/tmp/CF5000_NR01-RSZ_Result_TWML4_2021FEB22_1340_8AA/source
[flameTimewarpML] Executing command: echo "/var/tmp/CF5000_NR01-RSZ_Result_TWML4_2021FEB22_1340_8AA">/vol/pipeline/flameTimewarpML/locks/83B1920FC647E560A256A0F031BD225510AA3C78.lock
[flameTimewarpML] Executing command: konsole -e /bin/bash -c 'eval "$(/vol/pipeline/miniconda/bin/conda shell.bash hook)"; conda activate; cd /vol/pipeline/flameTimewarpML/; echo "Received 1 clip to process, press Ctrl+C to cancel"; trap exit SIGINT SIGTERM; python3 /vol/pipeline/flameTimewarpML/inference_sequence.py --input /var/tmp/CF5000_NR01-RSZ_Result_TWML4_2021FEB22_1340_8AA/source --output /var/tmp/CF5000_NR01-RSZ_Result_TWML4_2021FEB22_1340_8AA --model /vol/pipeline/flameTimewarpML/trained_models/default/v2.0.model --exp=2; '

Python 3.8.5 requirement.

Hello Talosh,

In your central install documentation, you list Python 3.8.5 as a requirement. Even on CentOS 8.2 the most recent available version is 3.8.3. Given that all our Flames are on CentOS 7.6 and won't be moving to CentOS 8.3 or newer for a very long time, this requirement is a bit hard to meet. Would you be able to either provide some additional documentation on getting Python 3.8.5 installed in an "ADSK-safe" way, or get things working with a less stringent Python version requirement?

Thanks,
Alan

pip requirements failure.

[alan@ws104 flameTimewarpML]$ pip3 install -r requirements.txt 
Collecting numpy (from -r requirements.txt (line 1))
  Using cached https://files.pythonhosted.org/packages/45/b2/6c7545bb7a38754d63048c7696804a0d947328125d81bf12beaa692c3ae3/numpy-1.19.5-cp36-cp36m-manylinux1_x86_64.whl
Collecting scipy (from -r requirements.txt (line 2))
  Using cached https://files.pythonhosted.org/packages/c8/89/63171228d5ced148f5ced50305c89e8576ffc695a90b58fe5bb602b910c2/scipy-1.5.4-cp36-cp36m-manylinux1_x86_64.whl
Collecting tqdm (from -r requirements.txt (line 3))
  Using cached https://files.pythonhosted.org/packages/d9/13/f3f815bb73804a8af9cfbb6f084821c037109108885f46131045e8cf044e/tqdm-4.57.0-py2.py3-none-any.whl
Collecting torch (from -r requirements.txt (line 4))
  Using cached https://files.pythonhosted.org/packages/90/4f/acf48b3a18a8f9223c6616647f0a011a5713a985336088d7c76f3a211374/torch-1.7.1-cp36-cp36m-manylinux1_x86_64.whl
Collecting opencv-python (from -r requirements.txt (line 5))
  Using cached https://files.pythonhosted.org/packages/bb/08/9dbc183a3ac6baa95fabf749ddb531bd26256edfff5b6c2195eca26258e9/opencv-python-4.5.1.48.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-gvvshdzt/opencv-python/setup.py", line 10, in <module>
        import skbuild
    ModuleNotFoundError: No module named 'skbuild'
    
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-gvvshdzt/opencv-python/

SuperResolution model?

Hi Talosh,

Not really a bug, I just didn't know a better place to post this. I'm wondering whether the framework you have built here could be used to swap in an ML super-resolution model for upscaling?

Just a thought,
Alan

v0.4.5 dev 001 - frames are doubling up.

I tried running a 50% timewarp on a clip with Mix selected. TWML rendered a clip with a frame hold on each frame: 1-1, 2-2, 3-3, etc.
I then converted the Timewarp to Motion, pre-rendered the clip, then ran TWML using "flownet4". This was better, but it randomly added double frames too, and it did not start at frame 1, as my frame-counter burn-in showed. It started at frame 2 and went 2, 2.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 6, 6, 6.5, 7, 8, 8, etc.

I used both models; "flownet4-lite" was worse, with almost every frame doubled.

macOS 13.6.4 (22G513)
Flame 2024.2.1
original frame rate of the clip is 23.976

v0.4.3 on mac flame 2024 -- "Free RAM" is lower than expected

Hi Andriy,

When using flameTimewarpML v0.4.3 with flame_2024.2.1 on a Mac, we noticed that the reported "Free RAM" was low (27G). This is our first time running TWML on a Mac (instead of Linux). We thought the available RAM would be higher (no other apps running). Is this expected behavior?

The mac is an M1 Ultra with 128G of RAM running macOS 14.4.1 (sonoma).

When I logged in as another user and checked psutil.virtual_memory().available, I got ~116G. When I ran the same command as the flame user in the conda env, I got ~29G. psutil.virtual_memory() shows the total was the same between the two, but everything else was different. However, when I ran vm_stat, all numbers were relatively the same.

Also, I'm not very familiar with conda environments, but I think the test is valid: I just ran it in a shell (Flame was not running). Please LMK if we're doing anything incorrectly.

thanks,
Janice

flameTimewarpML output:

initializing Timewarp ML...
Trained model loaded: /Users/elk/Documents/flameTimewarpML/bundle/trained_models/default/v2.4.model
---
Free RAM: 27.1 Gb available
Image size: 4096 x 2160
Peak memory usage estimation: 17.6 Gb per CPU thread
Limiting threads to 2 CPU worker threads (of 20 available) to prevent RAM overflow

Logged in as same user (with conda env):

(base) flame07:~ elk$ pip list | grep psutil
psutil                 5.8.0

(base) flame07:~ elk$ python3
Python 3.8.5 (default, Sep  4 2020, 02:22:02)
[Clang 10.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import psutil
>>> import os
>>> psutil.virtual_memory()
svmem(total=137438953472, available=31214690304, percent=77.3, used=2921762816, free=26390863872, active=2316161024, inactive=3009757184, wired=605601792)
>>> 
>>> os.system('/usr/bin/vm_stat')
Mach Virtual Memory Statistics: (page size of 16384 bytes)
Pages free:                             6442438.
Pages active:                            566214.
Pages inactive:                          734806.
Pages speculative:                       442898.
Pages throttled:                              0.
Pages wired down:                        147851.
Pages purgeable:                           3726.
"Translation faults":                1242091097.
Pages copy-on-write:                   24565273.
Pages zero filled:                    774014453.
Pages reactivated:                       496959.
Pages purged:                           2599716.
File-backed pages:                      1441609.
Anonymous pages:                         302309.
Pages stored in compressor:                   0.
Pages occupied by compressor:                 0.
Decompressions:                               0.
Compressions:                                 0.
Pageins:                               16269383.
Pageouts:                                     0.
Swapins:                                      0.
Swapouts:                                     0.
0
>>>

Another user logged in:

(env) engineer@flame07 jtest % pip list | grep psutil
psutil  5.9.8

(env) engineer@flame07 jtest % python3
Python 3.12.3 (main, Apr  9 2024, 08:09:14)
[Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import psutil
>>> import os
>>> psutil.virtual_memory()
svmem(total=137438953472, available=124859547648, percent=9.2, used=11686854656, free=105564241920, active=9264644096, inactive=12039028736, wired=2422210560)
>>> 
>>> os.system('/usr/bin/vm_stat')
Mach Virtual Memory Statistics: (page size of 16384 bytes)
Pages free:                             6443054.
Pages active:                            565044.
Pages inactive:                          734804.
Pages speculative:                       442890.
Pages throttled:                              0.
Pages wired down:                        148329.
Pages purgeable:                           3379.
"Translation faults":                1242087099.
Pages copy-on-write:                   24565099.
Pages zero filled:                    774013096.
Pages reactivated:                       496959.
Pages purged:                           2599716.
File-backed pages:                      1441600.
Anonymous pages:                         301138.
Pages stored in compressor:                   0.
Pages occupied by compressor:                 0.
Decompressions:                               0.
Compressions:                                 0.
Pageins:                               16269382.
Pageouts:                                     0.
Swapins:                                      0.
Swapouts:                                     0.
0
>>>

Better memory estimation and thread limiting in CPU mode.

Hello,

I'm running a Fill / Remove duplicate frame pass on a 2970x3200 16-bit fp clip. I've found that on this particular shot, CPU mode produces much better results.

FlameTimewarpML is stating:
Free RAM: 99.3 Gb available
Peak memory usage estimation: 22.8 Gb per thread. Limiting to 4 of 48 available threads.

But the machine consistently has around 80 Gb free during the run, so the estimation seems overly aggressive, leaving lots of GPU and RAM unused. It is taking a very long time to run, and using those extra idle cores would be nice.

Thanks.
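
One possible direction, sketched here under the assumption that worker threads gate each frame on a semaphore (this is not the plugin's actual code): instead of sizing the pool once from a conservative peak estimate, start with more workers and let a watcher throttle them only when the measured free RAM actually gets low.

import threading
import time

import psutil

class MemoryWatcher(threading.Thread):
    def __init__(self, semaphore, min_free_gb=16, interval=2.0):
        super().__init__(daemon=True)
        self.semaphore = semaphore      # workers acquire this before processing a frame
        self.min_free_gb = min_free_gb
        self.interval = interval
        self.throttled = False

    def run(self):
        while True:
            free_gb = psutil.virtual_memory().available / (1024 ** 3)
            if free_gb < self.min_free_gb and not self.throttled:
                self.semaphore.acquire()    # temporarily take one worker slot away
                self.throttled = True
            elif free_gb > self.min_free_gb * 1.5 and self.throttled:
                self.semaphore.release()    # give the slot back once RAM recovers
                self.throttled = False
            time.sleep(self.interval)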

32-bit float.

Hi Talosh...

If we have 32-bit fp source footage, it comes back as 16-bit fp. It would be great to keep that bit depth.

Thanks,
Alan
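
For reference, a small sketch of reading and writing full-float EXRs with OpenCV (which the bundle's inference scripts appear to use for image I/O, per the cv2.imread calls in the logs below); the file names are placeholders and this is not the plugin's actual code:

import os
os.environ.setdefault('OPENCV_IO_ENABLE_OPENEXR', '1')  # newer opencv-python builds disable EXR by default

import cv2
import numpy as np

# With IMREAD_ANYDEPTH, OpenCV decodes EXR data as float32; IMWRITE_EXR_TYPE_FLOAT
# asks the writer for full 32-bit float output instead of the half-float default.
img = cv2.imread('in.exr', cv2.IMREAD_COLOR | cv2.IMREAD_ANYDEPTH)
assert img.dtype == np.float32
cv2.imwrite('out.exr', img, [cv2.IMWRITE_EXR_TYPE, cv2.IMWRITE_EXR_TYPE_FLOAT])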

invalid literal for int()

I'm trying to process a clip and getting this error:

"invalid literal for int() with base 10: '1.5' in <class 'ValueError'>"
['Traceback (most recent call last):\n',
'  File "/opt/flametimewarp/bundle/inference_flame_tw.py", line 520, in '
'<module>\n'
'    frame_value_map = bake_flame_tw_setup(args.setup, args.record_in, '
'args.record_out)\n',
'  File "/opt/flametimewarp/bundle/inference_flame_tw.py", line 309, in '
'bake_flame_tw_setup\n'
"    tw_speed[int(index)] = {'frame': int(frame), 'value': float(value)}\n",
"ValueError: invalid literal for int() with base 10: '1.5'\n"]
Traceback (most recent call last):
 File "/opt/flametimewarp/bundle/inference_flame_tw.py", line 520, in <module>
   frame_value_map = bake_flame_tw_setup(args.setup, args.record_in, args.record_out)
 File "/opt/flametimewarp/bundle/inference_flame_tw.py", line 309, in bake_flame_tw_setup
   tw_speed[int(index)] = {'frame': int(frame), 'value': float(value)}
ValueError: invalid literal for int() with base 10: '1.5'

It's my basic understanding that this is because both the input and output frames need to be integers. Is there any way to work with non-integer frames? Thanks
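
A minimal sketch of one possible workaround, assuming the ValueError comes from a Flame TW setup that stores a fractional keyframe position such as '1.5': parse the values as floats and round, instead of calling int() on the raw string. The helper name is hypothetical.

def parse_tw_keyframe(index, frame, value):
    # Hypothetical helper: accept fractional frame positions such as '1.5' by
    # parsing them as floats and rounding to the nearest integer, instead of
    # calling int() on the raw string (which raises ValueError).
    return int(round(float(index))), {'frame': int(round(float(frame))), 'value': float(value)}

print(parse_tw_keyframe('3', '1.5', '0.75'))   # -> (3, {'frame': 2, 'value': 0.75})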

Environment variable for tmp location

Our machines have no local framestore, only a boot disk for the OS & apps.

This means the default /var/tmp location is not desirable, and counting on our artists to switch to a proper temp location is not reliable. Could you add an environment variable that would at least set the default to a location of our choice?

Thanks,
Alan
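
A minimal sketch of how this could look; FLAMETWML_WORK_FOLDER is an assumed variable name, not an existing setting:

import os

def default_work_root():
    # FLAMETWML_WORK_FOLDER is a hypothetical variable name; when it is not set,
    # fall back to the current /var/tmp default.
    return os.getenv('FLAMETWML_WORK_FOLDER', '/var/tmp')

A site-wide profile script could then point it at shared scratch storage once, instead of relying on artists to change the location per run.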

CentOS 8 no longer ships with Konsole

Flame now uses GNOME when running on CentOS 8, which breaks functionality, as konsole seems to be a dependency.

We can work around it by installing konsole, but it would be highly preferable to use gnome-terminal.
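
A rough sketch of a possible fallback, assuming the plugin only needs a terminal to run a shell command in; open_terminal() is a hypothetical helper, not the plugin's actual code:

import shutil
import subprocess

def open_terminal(command):
    # Prefer konsole when it is installed, otherwise fall back to gnome-terminal,
    # which ships with the GNOME desktop used on CentOS 8.
    if shutil.which('konsole'):
        return subprocess.Popen(['konsole', '-e', '/bin/bash', '-c', command])
    if shutil.which('gnome-terminal'):
        return subprocess.Popen(['gnome-terminal', '--', '/bin/bash', '-c', command])
    raise RuntimeError('No supported terminal emulator found')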

Option to delete shot work folder

It would be great if there were an option to delete the shot work folder when done, plus an environment variable controlling its default state. Personally I see no need to keep that stuff around after the result has been imported into Flame.
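
A small sketch of what that could look like; FLAMETWML_KEEP_WORK_FOLDER is an assumed variable name and the helper is hypothetical:

import os
import shutil

def cleanup_work_folder(path, keep=None):
    # Delete the shot work folder after the result has been imported, unless the
    # (hypothetical) FLAMETWML_KEEP_WORK_FOLDER variable asks to keep it.
    if keep is None:
        keep = os.getenv('FLAMETWML_KEEP_WORK_FOLDER', '0') == '1'
    if not keep and os.path.isdir(path):
        shutil.rmtree(path)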

engine: inference_sequence.py with --cpu support

* inference sequence with better memory usage (make_inference() should not call itself recursively)
* unneeded flags should be removed
* --cpu flag to force CPU processing on CUDA-enabled systems (see the sketch after this list)
* memory watcher that lowers the number of threads per batch if memory is too low
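
A minimal sketch of how the --cpu override could be wired into the script's argument parsing; only the flag name comes from the list above, everything else is an assumption:

import argparse

import torch

parser = argparse.ArgumentParser()
parser.add_argument('--cpu', action='store_true',
                    help='force CPU processing even on CUDA-enabled systems')
args, _ = parser.parse_known_args()   # ignore the script's other flags in this sketch

# '--cpu' wins over CUDA detection; otherwise use the GPU when one is available.
device = torch.device('cpu') if args.cpu or not torch.cuda.is_available() else torch.device('cuda')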

flameTimewarpML uninstall / removal.

Hello Talosh, we have been using your product for a while now and it is very good.
One of our artists has asked us to remove it from their Flame.

Is there a list of installed directories / content, or do you have an uninstaller?
Thanks in advance.
Ron

Remote Framestores

Hi,

We're running the latest dev build (v0.5) and experiencing an issue when working in projects stored on remote framestores. We currently get the error:

Error creating destination wiretap node: Unable to obtain clip format: develop/src/libcmapi/wiretap/ifffsWTEntryMgr.C:4145: Project 'X' not found.

Where 'X' is the name of the project.

It works flawlessly in projects stored on the local framestore.

Cheers

Stuck with Central install

Hey,

Probably just my lack of knowledge / misreading, but I can't seem to run this command from the central install instructions:
/mnt/software/flameTimewarpML/init_env /mnt/software/miniconda3/ # Linux

I get this error:
-bash: /mnt/software/flameTimewarpML/init_env: No such file or directory

I've run through all the instructions successfully so far (no errors) so not sure what I am doing wrong.
Any help would be greatly appreciated!

Cheers

Flame 2022 PR140 prompting for install

Hi Talosh,

We have TimewarpML set up for centralized install. With Flame 2022 PR140, a prompt comes up to install to local user. Flame 2021 does not do this, and the central install works fine.

CUDA error: no kernel image is available for execution on the device (RTX A6000)

Hi there

we're trying to run this on:
CentOS 7.6
NVIDIA-SMI 460.91.03
RTX A6000
flame_2022.2

we get this error

Received 1 clip to process, press Ctrl+C to cancel
Initializing duplicate frames interpolation...
Trained model loaded: /home/flame/flameTimewarpML/bundle/trained_models/default/v2.4.model
Total frames: 0%| | 0/750 [00:00<?, ?frame/s('CUDA error: no kernel image is available for execution on the device in '
"<class 'RuntimeError'>")
['Traceback (most recent call last):\n',
' File "/home/flame/flameTimewarpML/bundle/inference_dpframes.py", line 298, '
'in \n'
' ICurrent = F.pad(ICurrent, padding)\n',
' File '
'"/home/flame/flameTimewarpML/miniconda3/lib/python3.8/site-packages/torch/nn/functional.py", '
'line 3553, in _pad\n'
' return _VF.constant_pad_nd(input, pad, value)\n',
'RuntimeError: CUDA error: no kernel image is available for execution on the '
'device\n']
Traceback (most recent call last):
File "/home/flame/flameTimewarpML/bundle/inference_dpframes.py", line 298, in
ICurrent = F.pad(ICurrent, padding)
File "/home/flame/flameTimewarpML/miniconda3/lib/python3.8/site-packages/torch/nn/functional.py", line 3553, in _pad
return _VF.constant_pad_nd(input, pad, value)
RuntimeError: CUDA error: no kernel image is available for execution on the device
Press Enter to continue...Exception ignored in thread started by: <function build_read_buffer at 0x7f4f54daf280>
Traceback (most recent call last):
File "/home/flame/flameTimewarpML/bundle/inference_dpframes.py", line 97, in build_read_buffer
frame_data = cv2.imread(os.path.join(user_args.input, frame), cv2.IMREAD_COLOR | cv2.IMREAD_ANYDEPTH)[:, :, ::-1].copy()
TypeError: 'NoneType' object is not subscriptable

and in the flame log it shows this

[flameTimewarpML] [flameAppFramework] waking up
[flameTimewarpML] preferences loaded from /home/flame/.config/flameTimewarpML/ep01/flameTimewarpML.IB.xtfx1.prefs
[flameTimewarpML] preferences loaded from /home/flame/.config/flameTimewarpML/ep01/flameTimewarpML.IB.prefs
[flameTimewarpML] preferences loaded from /home/flame/.config/flameTimewarpML/ep01/flameTimewarpML.prefs
[flameTimewarpML] checking existing bundle id /home/flame/flameTimewarpML/bundle/bundle_id
[flameTimewarpML] env bundle already exists with id matching current version
PYTHON : flameTimewarpML initializing
[flameTimewarpML] creating folders: /var/tmp/SKIP_GAIA_GREECE_30secs_23Dec21_DUPFR_2022JAN05_1537_69A/source
[flameTimewarpML] Executing command: echo "/var/tmp/SKIP_GAIA_GREECE_30secs_23Dec21_DUPFR_2022JAN05_1537_69A">/home/flame/flameTimewarpML/bundle/locks/B9EEB2FEBB2B44EF6D0D47EFAD7FDE6DB1AD8132.lock
[flameTimewarpML] Executing command: konsole -e /bin/bash -c 'eval "$(/home/flame/flameTimewarpML/miniconda3/bin/conda shell.bash hook)"; conda activate; cd /home/flame/flameTimewarpML/bundle; echo "Received 1 clip to process, press Ctrl+C to cancel"; trap exit SIGINT SIGTERM; python3 /home/flame/flameTimewarpML/bundle/inference_dpframes.py --model /home/flame/flameTimewarpML/bundle/trained_models/default/v2.4.model --input /var/tmp/SKIP_GAIA_GREECE_30secs_23Dec21_DUPFR_2022JAN05_1537_69A/source --output /var/tmp/SKIP_GAIA_GREECE_30secs_23Dec21_DUPFR_2022JAN05_1537_69A; echo "Commands finished. You can close this window"'
[flameTimewarpML] Importing result from: /var/tmp/SKIP_GAIA_GREECE_30secs_23Dec21_DUPFR_2022JAN05_1537_69A
[flameTimewarpML] Cleaning up temporary files used: ['/var/tmp/SKIP_GAIA_GREECE_30secs_23Dec21_DUPFR_2022JAN05_1537_69A/source']
[flameTimewarpML] Executing command: rm -f "/var/tmp/SKIP_GAIA_GREECE_30secs_23Dec21_DUPFR_2022JAN05_1537_69A/source/"*
Exception in thread Thread-2:
Traceback (most recent call last):
File "/opt/Autodesk/python/2022.2/lib/python3.7/threading.py", line 926, in _bootstrap_inner
self.run()
File "/opt/Autodesk/python/2022.2/lib/python3.7/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/opt/Autodesk/shared/python/flameTimewarpML.py", line 3138, in raise_last_window
if 'Flame' not in line:
TypeError: a bytes-like object is required, not 'str'

Doesn't work with FlameAssist

Jul 10 18:42:01 : [flameTimewarpML] creating folders: /vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C/source
Jul 10 18:42:03 : [flameTimewarpML] Executing command: gnome-terminal -- /bin/bash -c 'eval "$(/vol/pipeline/miniconda3/bin/conda shell.bash hook)"; conda activate; cd /vol/pipeline/flameTimewarpML/; echo "Received 1 clip to process, press Ctrl+C to cancel"; trap exit SIGINT SIGTERM; python3 /vol/pipeline/flameTimewarpML/command_wrapper.py /vol/pipeline/flameTimewarpML/locks/47B3EFD3A1A8D1C032AAB356156E6A20E75CD89C.lock; '
Jul 10 18:43:00 : [flameTimewarpML] Importing result from: /vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C
Jul 10 18:43:00 : [flameTimewarpML] Cleaning up temporary files used: ['/vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C/source']
Jul 10 18:43:00 : [flameTimewarpML] Executing command: rm -f "/vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C/source/"*
Jul 10 18:43:00 : Could not create an effect of given effect type. Check effect_types property for help.
Jul 10 18:43:01 : [flameTimewarpML] Executing command: rm -f "/vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C/"*
Jul 10 18:43:12 : Traceback (most recent call last):
Jul 10 18:43:12 :   File "/vol/pipeline/adsk/hooks/flameTimewarpML.py", line 3156, in import_flame_clip
Jul 10 18:43:12 :     segment.create_effect('Source Image')
Jul 10 18:43:12 : RuntimeError: Could not create an effect of given effect type. Check effect_types property for help.
Jul 10 18:43:12 : No such file or directory
Jul 10 18:43:12 : Could not decode video [frame 9]: No such file or directory
Jul 10 18:43:12 : I/O error: /vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C/0000010.exr (/nfs/nas01/data01/cache/stonewire/sw01/stonefs1/5/0x454001d0100000e6.mio origin: c596f5c6-8a91-4820-b04c-dd7f6af77e73-GW(192.168.10.109:Gateway) node /vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C/[0000001-0000028].exr@TRACK(19)BEAUTY:MasterBeauty:(H7)OpenEXR:(2)v0)
Jul 10 18:43:14 : No such file or directory
Jul 10 18:43:14 : Could not decode video [frame 0]: No such file or directory
Jul 10 18:43:14 : I/O error: /vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C/0000001.exr (/nfs/nas01/data01/cache/stonewire/sw01/stonefs1/5/0x454001d0100000e6.mio origin: c596f5c6-8a91-4820-b04c-dd7f6af77e73-GW(192.168.10.109:Gateway) node /vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C/[0000001-0000028].exr@TRACK(19)BEAUTY:MasterBeauty:(H7)OpenEXR:(2)v0)
Jul 10 18:43:14 : Cannot access frame 1: /vol/pipeline/flameTimewarpML/temp/gtu_itlrF_vers7_rev2_0629_TWML_2023JUL10_1842_21C/0000001.exr [I/O error]
Jul 10 18:43:14 : No such file or directory

Prefetcher intentionally dropped frame

I'm a support tech at a facility; our Flame op works on:
HP Z8 Workstation
x86_64 Intel Xeon Gold 6136
Nvidia Quadro P6000
CentOS 7.6

But timewarped clips just won't export. We're getting the following error in the Flame shell logs:
May 30 16:05:56 : Timewarp from Flame's TW effect (beta) execute callback [<bound method flameTimewarpML.fltw of <flameTimewarpML.flameTimewarpML object at 0x7f9f7a580f10>>((<flame.PyClip object at 0x7f9f7a585bc0>,),)] failed because of an error of type "Cannot export clip 'Full_Film_Pre_TW_RG_015 : Full_Film_Pre_TW_RG_015' : Source: Full_Film_Pre_TW_RG_015: Prefetcher intentionally dropped frame 14 fetch status Dropped request: track 0 socket Result."

I have uninstalled and reinstalled TimewarpML a couple of times now to no avail. We get this issue with different clips and across multiple projects, and I'm not sure how else to troubleshoot. The full output from the shell log is below:

May 30 16:05:22 : [flameTimewarpML] [flameAppFramework] waking up
May 30 16:05:22 : [flameTimewarpML] preferences loaded from /home/flame01/.config/flameTimewarpML/flame01/flameTimewarpML.RichG.Foxy_Bingo_MrTom_02681.prefs
May 30 16:05:22 : [flameTimewarpML] preferences loaded from /home/flame01/.config/flameTimewarpML/flame01/flameTimewarpML.RichG.prefs
May 30 16:05:22 : [flameTimewarpML] preferences loaded from /home/flame01/.config/flameTimewarpML/flame01/flameTimewarpML.prefs
May 30 16:05:22 : [flameTimewarpML] checking existing bundle id /home/flame01/flameTimewarpML/bundle/bundle_id
May 30 16:05:22 : [flameTimewarpML] env bundle already exists with id matching current version
May 30 16:05:22 : PYTHON        : flameTimewarpML initializing
May 30 16:05:42 : [flameTimewarpML] creating folders: /var/tmp/Full_Film_Pre_TW_RG_015_TWML_2022MAY30_1605_F98/source
May 30 16:05:44 : Source: Full_Film_Pre_TW_RG_015: Prefetcher intentionally dropped frame 14 fetch status Dropped request
May 30 16:05:44 : Source: Full_Film_Pre_TW_RG_015: Prefetcher intentionally dropped frame 14 fetch status Dropped request: track 0 socket Result
May 30 16:05:45 : Source: Full_Film_Pre_TW_RG_015: Prefetcher intentionally dropped frame 14 fetch status Dropped request: track 0 socket Result
May 30 16:05:56 : Timewarp from Flame's TW effect (beta) execute callback [<bound method flameTimewarpML.fltw of <flameTimewarpML.flameTimewarpML object at 0x7f9f7a580f10>>((<flame.PyClip object at 0x7f9f7a585bc0>,),)] failed because of an error of type "Cannot export clip 'Full_Film_Pre_TW_RG_015 : Full_Film_Pre_TW_RG_015' : Source: Full_Film_Pre_TW_RG_015: Prefetcher intentionally dropped frame 14 fetch status Dropped request: track 0 socket Result."
May 30 16:14:16 : [flameTimewarpML] creating folders: /var/tmp/Full_Film_Pre_TW_RG_015_TWML_2022MAY30_1614_2C1/source
May 30 16:14:23 : Source: Full_Film_Pre_TW_RG_015: Prefetcher intentionally dropped frame 83 fetch status Dropped request
May 30 16:14:23 : Source: Full_Film_Pre_TW_RG_015: Prefetcher intentionally dropped frame 83 fetch status Dropped request: track 0 socket Result
May 30 16:14:23 : Source: Full_Film_Pre_TW_RG_015: Prefetcher intentionally dropped frame 83 fetch status Dropped request: track 0 socket Result
May 30 16:14:35 : Timewarp from Flame's TW effect (beta) execute callback [<bound method flameTimewarpML.fltw of <flameTimewarpML.flameTimewarpML object at 0x7f9f7a580f10>>((<flame.PyClip object at 0x7f9f7b0df530>,),)] failed because of an error of type "Cannot export clip 'Full_Film_Pre_TW_RG_015 : Full_Film_Pre_TW_RG_015' : Source: Full_Film_Pre_TW_RG_015: Prefetcher intentionally dropped frame 83 fetch status Dropped request: track 0 socket Result."

Linux Flame 2025

Hi,
I am trying to install flameTimewarpML in Flame 2025 on Rocky 8.7, but I haven't had any success. Any advice on how to do it?

Thank you,
I
