Comments (26)
hey first of all try:
./1_movie2frames avconv BUBBLES.mov movFrames
you're supposed to provide ffmpeg OR avconv as an argument.
any luck getting further with that?
from deepdreamvideo.
I converted the video to frames via blender for the time being to move on to step two.
I dropped those stills into the frames folder and executed the Python script, and it returned "cannot launch GPU with CPU only", so I'm remaking Caffe without CPU-only... perhaps there is a way to run your code under CPU only?
I tried your recommendation and this is what it returned:
r2d2@r2d2-VirtualBox:~/Downloads/DeepDreamVideo$ ./1_movie2frames avconv BUBBLES.mov movFrames
bash: ./1_movie2frames: No such file or directory
r2d2@r2d2-VirtualBox:~/Downloads/DeepDreamVideo$ ls
1_movie2frames.sh   3_frames2movie.sh   frames2gif.sh   movFrames   README.md
2_dreaming_time.py  frames              LICENSE         processed   tmp.prototxt
you forgot the extension: what about
./1_movie2frames.sh avconv BUBBLES.mov movFrames
r2d2@r2d2-VirtualBox:~/Downloads/DeepDreamVideo$ ./1_movie2frames.sh avconv BUBBLES.mov movFrames
./1_movie2frames.sh: line 12: ffmpeg: command not found
I guess Ubuntu 14.04 doesn't have ffmpeg... but shouldn't avconv cover for that?
Thanks for your help!
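For what it's worth, the fallback being asked about here can be sketched in Python (a hypothetical helper, not the actual 1_movie2frames.sh logic): probe which extraction tool is actually installed and use that, instead of hard-coding one and failing with "command not found".

```python
import shutil

def pick_tool(candidates):
    """Return the first binary from `candidates` found on PATH, else None.

    E.g. pick_tool(["ffmpeg", "avconv"]) lets a script honour whichever
    frame-extraction tool the system provides.
    """
    for name in candidates:
        if shutil.which(name):
            return name
    return None
```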
is there a way to run the 2_dreaming_time.py script as CPU only?
Seems like it'll run CPU-only unless you pass the -g flag (from #14)
Edit: and from the readme:
(don't forget the --gpu flag if you got a gpu to run on, where 0 is the index of
the gpu you'd like to use if you have more than 1)
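The flag handling described above can be sketched like this (flag names taken from the thread; the Caffe calls are shown only as comments because they need a working Caffe install):

```python
import argparse

def parse_device(argv):
    """Parse the --gpu flag as the README describes: omit it for
    CPU-only, or pass the index of the GPU you want to use."""
    parser = argparse.ArgumentParser()
    parser.add_argument("-g", "--gpu", type=int, default=None,
                        help="GPU index; omit to run CPU-only")
    return parser.parse_args(argv).gpu

# In the real script this value would drive Caffe's mode, roughly:
#   if gpu_index is None:
#       caffe.set_mode_cpu()
#   else:
#       caffe.set_device(gpu_index)
#       caffe.set_mode_gpu()
```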
great thanks, that works! Now the next problem...
It starts and then it crashes... earlier it returned something about not being able to allocate enough memory and doing a core dump...
Processing frame #1
/home/r2d2/anaconda/lib/python2.7/site-packages/scipy/ndimage/interpolation.py:549: UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed.
"the returned array has changed.", UserWarning)
(0, 0, 'inception_4d/output', (320, 569, 3))
(0, 1, 'inception_4d/output', (320, 569, 3))
(0, 2, 'inception_4d/output', (320, 569, 3))
(0, 3, 'inception_4d/output', (320, 569, 3))
(0, 4, 'inception_4d/output', (320, 569, 3))
(1, 0, 'inception_4d/output', (480, 853, 3))
(1, 1, 'inception_4d/output', (480, 853, 3))
(1, 2, 'inception_4d/output', (480, 853, 3))
(1, 3, 'inception_4d/output', (480, 853, 3))
(1, 4, 'inception_4d/output', (480, 853, 3))
(2, 0, 'inception_4d/output', (720, 1280, 3))
Killed
I'm running a virtual machine, so I think I might not have enough available RAM or storage for the script to complete, or, since I'm doing CPU-only, it's running out of buffer space. Getting closer!
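The three sizes in the log come from the octave loop: the full 720x1280 frame is repeatedly shrunk by the octave scale (1.5 by default in the deepdream code), and each octave is processed at its own size. A sketch that reproduces the sizes printed above:

```python
def octave_shapes(full_h, full_w, octave_n=3, octave_scale=1.5):
    """Compute the per-octave image sizes, smallest first, by repeatedly
    dividing the full frame size by octave_scale (rounding, as the scipy
    warning in the log notes zoom() now does)."""
    shapes = [(full_h, full_w)]
    for _ in range(octave_n - 1):
        h, w = shapes[-1]
        shapes.append((int(round(h / octave_scale)),
                       int(round(w / octave_scale))))
    return list(reversed(shapes))
```

octave_shapes(720, 1280) gives (320, 569), (480, 853), (720, 1280), matching the log; peak memory is set by the largest octave, the full frame, which is consistent with the process only being Killed once it reaches octave 2.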
I'm having exactly the same problem as @reillydonovan
I'm using an Intel Core i3 processor to do the job, with no GPU card on my notebook, on Ubuntu 14.04. I think the problem has to do with the CPU lacking 'horsepower', or something like that.
I opened another issue with this @reillydonovan --> #20
As I understand it, it's possible now to run the code with just the CPU, correct?
As for the task being killed after the first frame, as @reillydonovan writes, let's continue at #20.
@trotskylenin @reillydonovan I've pushed a new PR for the CPU behavior; it's no longer needed to give an argument to the -g flag.
Could you test whether the code works now with CPU only?
I've started a wiki with advice as well; feel free to add to it: https://github.com/graphific/DeepDreamVideo/wiki
Can 2_dreaming_time.py run with a boot2docker dreamer such as ryankennedy's? I've managed to get 1_movie2frames.sh working, but the output seems a little short: for a 6-second video I only get 50 frames.
Thanks
@Acanterelle I'm going to try to dockerize this tonight
@Acanterelle yes that would be short indeed: with a framerate of 25, you'd expect 6 x 25 = 150 frames
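The arithmetic, for checking your own clips (this assumes a constant frame rate; a variable-frame-rate source would need ffprobe to confirm the real rate):

```python
def expected_frames(duration_s, fps=25.0):
    """Number of frames a constant-rate extraction should produce."""
    return round(duration_s * fps)

# 6 s at 25 fps -> 150 frames. Getting only 50 means the frames were
# effectively extracted at about 50 / 6 ~ 8.3 fps, so the rate passed
# to the extraction command is worth checking.
```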
Could people post some example timings please - of how long it takes 2_dreaming_time.py to run per frame?
@graphific I know it's short! :) I wonder if the error is in my command-line entry or if there's an issue somewhere else.
I'll try to run it CPU-style when I can find some time as well...
@rosshamish Awesome!
Hey, here's a thought... Is it possible to demux the frames of a target video to use as a guide when rendering an animation? Or will providing --guide-image waving_flowers.mp4 cause the .sh to crash?
The H.264 codec used in most MP4 videos is not intra-frame only, which means you cannot simply "demux" or unwrap the frames: only some of them (the I-frames) really exist, and the rest are just bidirectional motion-tracking information pointing back to those I-frames (search "h264 GOP" for more details). This means you need an intermediate transcoding step to recreate those frames, so you need ffmpeg / avconv / MPlayer again, but with the guide video as input. Only this way is it possible to do what you want.
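In Python terms, the "transcode again" step is just another invocation of the extractor over the guide video. A sketch that builds such a command (the output pattern and jpeg choice here are illustrative, not the exact arguments 1_movie2frames.sh uses):

```python
def extract_frames_cmd(tool, video, out_dir, pattern="%08d.jpg"):
    """Build a full-decode command: the decoder reconstructs every frame
    from its GOP, so the stills really exist on disk afterwards."""
    return [tool, "-i", video, f"{out_dir}/{pattern}"]

# e.g. subprocess.run(extract_frames_cmd("ffmpeg", "waving_flowers.mp4",
#                                        "guide_frames"), check=True)
```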
@trotskylenin You lost a little in translation there... The particularly important question was: "can this handle using a target video rather than a single still image?" 1_movie2frames.sh can already handle breaking up a video into a frame sequence with ffmpeg or avconv.
Whichever way we arrive at the frames, is it possible, as is, to input an image sequence as the target rather than a single target image?
Well, you could of course rip the frames from the guide video and feed those as a guide to the dream frames, frame by frame. The duration / number of frames would have to be synced, however.
We could imagine adding a --guide_directory flag, but again, what should the behaviour be when there are fewer guide frames than original frames?
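One plausible policy for that hypothetical --guide_directory flag (nothing in the repo implements this): stretch the shorter guide sequence over the original timeline with nearest-neighbour resampling, much like a timestretch:

```python
def guide_index(frame_i, n_frames, n_guides):
    """Map an original frame index to a guide frame index by stretching
    the guide sequence over the full clip (nearest-neighbour)."""
    if n_guides >= n_frames:
        return min(frame_i, n_guides - 1)  # enough guides: use 1:1
    return frame_i * n_guides // n_frames  # stretch the shorter sequence
```

With 150 dream frames and 50 guide frames, each guide frame would simply be reused for three consecutive dream frames.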
A --guide_directory flag is exactly it!
I think dictating the behaviour will be easier from a graphic-design perspective than from a computer-science perspective. The timestretch functionality in AE springs to mind. Ideally this would be done from a visual timeline: being able to preview and scrub through the two image sequences in tandem, prior to applying the dream, would greatly ease the decision-making process.
Of course, why not just handle that as part of a pipeline? Blender can.
One might envision hacking Blender's GUI down to the bare bones, stripping out essentially everything but the timeline and image-sequence editor, and then using "add image strips" + the preview panel to line up the image sequences before sending a call to the AI. That could save a lot of heavy lifting, and it would make creative decisions a matter of visual judgement rather than fire-and-forget from the command line.
It might even be conceivable to do something like a PyOpenGL playblast... which would save a lot of design time if you don't like the basic look of the dream.
Yes, of course. It would be great to build a GUI for this... but well, the project is still miles away from that.