Comments (26)

graphific commented on August 18, 2024

hey first of all try:
./1_movie2frames avconv BUBBLES.mov movFrames
you're supposed to provide ffmpeg OR avconv as an argument.

any luck getting further with that?

reillydonovan commented on August 18, 2024

I converted the video to frames via Blender for the time being so I could move on to step two.
I dropped those stills into the frames folder and executed the Python script, but it returned "cannot launch GPU with CPU only", so I'm remaking Caffe without CPU only... perhaps there is a way to run your code under CPU only?

I tried your recommendation and this is what it returned:

r2d2@r2d2-VirtualBox:~/Downloads/DeepDreamVideo$ ./1_movie2frames avconv BUBBLES.mov movFrames
bash: ./1_movie2frames: No such file or directory
r2d2@r2d2-VirtualBox:~/Downloads/DeepDreamVideo$ ls
1_movie2frames.sh  3_frames2movie.sh  frames2gif.sh  movFrames  README.md
2_dreaming_time.py  frames  LICENSE  processed  tmp.prototxt

graphific commented on August 18, 2024

you forgot the extension: what about
./1_movie2frames.sh avconv BUBBLES.mov movFrames

reillydonovan commented on August 18, 2024

r2d2@r2d2-VirtualBox:~/Downloads/DeepDreamVideo$ ./1_movie2frames.sh avconv BUBBLES.mov movFrames
./1_movie2frames.sh: line 12: ffmpeg: command not found

I guess Ubuntu 14.04 doesn't have ffmpeg... but shouldn't avconv cover for that?
Thanks for your help!
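
For what it's worth, the error suggests the script hit a hardcoded ffmpeg call on its line 12 even though avconv was passed. A minimal sketch of dispatching on whichever binary is actually installed (a hypothetical helper, not the script's real code):

```python
# Sketch: pick whichever frame-extraction tool is on PATH.
# Hypothetical helper; the real 1_movie2frames.sh may differ.
from distutils.spawn import find_executable

def pick_extractor(preferred):
    # Try the user's choice first, then fall back to the alternative.
    for name in (preferred, 'ffmpeg', 'avconv'):
        if find_executable(name):
            return name
    raise RuntimeError('neither ffmpeg nor avconv found on PATH')

print(pick_extractor('avconv'))  # -> 'avconv' on stock Ubuntu 14.04
```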

reillydonovan commented on August 18, 2024

is there a way to run the 2_dreaming_time.py script as CPU only?

rosshamish commented on August 18, 2024

Seems like it'll run CPU only unless you pass the -g flag (from #14)

Edit: and from the readme:

(don't forget the --gpu flag if you got a gpu to run on, where 0 is the index of 
the gpu you'd like to use if you have more than 1)
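
For context, switching Caffe between CPU and GPU from Python uses the standard pycaffe calls; a minimal sketch, assuming 2_dreaming_time.py wires its flag into something like this:

```python
import caffe

def set_compute_mode(gpu_id=None):
    # gpu_id=None -> CPU only; otherwise select that GPU by index.
    if gpu_id is None:
        caffe.set_mode_cpu()
    else:
        caffe.set_device(gpu_id)
        caffe.set_mode_gpu()
```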

reillydonovan commented on August 18, 2024

Great, thanks, that works! Now the next problem...
It starts and then it crashes. Before, it returned something about not being able to allocate enough memory and doing a core dump...

Processing frame #1
/home/r2d2/anaconda/lib/python2.7/site-packages/scipy/ndimage/interpolation.py:549: UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed.
"the returned array has changed.", UserWarning)
(0, 0, 'inception_4d/output', (320, 569, 3))
(0, 1, 'inception_4d/output', (320, 569, 3))
(0, 2, 'inception_4d/output', (320, 569, 3))
(0, 3, 'inception_4d/output', (320, 569, 3))
(0, 4, 'inception_4d/output', (320, 569, 3))
(1, 0, 'inception_4d/output', (480, 853, 3))
(1, 1, 'inception_4d/output', (480, 853, 3))
(1, 2, 'inception_4d/output', (480, 853, 3))
(1, 3, 'inception_4d/output', (480, 853, 3))
(1, 4, 'inception_4d/output', (480, 853, 3))
(2, 0, 'inception_4d/output', (720, 1280, 3))
Killed

I'm running a virtual machine, so I think I might not have enough available RAM or storage for the script to complete, or, since I'm doing CPU only, it's running out of buffer space. Getting closer!
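
"Killed" with no traceback usually means the kernel's OOM killer ended the process, here apparently at the full 720x1280 octave. One workaround is to shrink the frames before dreaming; a rough sketch with PIL, where the directory names and target width are assumptions:

```python
# Rough sketch: downscale frames to cut peak memory; paths/width assumed.
import os
from PIL import Image

SRC, DST, MAX_W = 'frames', 'frames_small', 640

if not os.path.exists(DST):
    os.makedirs(DST)
for name in sorted(os.listdir(SRC)):
    img = Image.open(os.path.join(SRC, name))
    w, h = img.size
    if w > MAX_W:
        # Keep aspect ratio while capping the width at MAX_W pixels.
        img = img.resize((MAX_W, h * MAX_W // w), Image.ANTIALIAS)
    img.save(os.path.join(DST, name))
```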

trotskylenin commented on August 18, 2024

I'm having the exact same problem as @reillydonovan

trotskylenin commented on August 18, 2024

I'm using an Intel i3 processor to do the job, with no GPU card on my notebook running Ubuntu 14.04. I think the problem has to do with the CPU lacking 'horsepower' 👎 or something like that.

trotskylenin commented on August 18, 2024

I opened another issue with this @reillydonovan --> #20

graphific commented on August 18, 2024

As I understand it, it's possible to run the code now with just the CPU, correct?

As for the task being killed after the first frame, as @reillydonovan writes, let's continue at #20.

graphific commented on August 18, 2024

@trotskylenin @reillydonovan I've merged a new PR for the CPU behavior. It's no longer necessary to give an argument to the -g flag.

Could you test whether the code works now with CPU only?
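
Presumably the change makes the flag's value optional; with argparse that pattern looks roughly like this (a sketch, not necessarily the PR's exact code):

```python
import argparse

parser = argparse.ArgumentParser()
# nargs='?': `-g` alone selects GPU 0 (const), `-g 1` selects GPU 1,
# and omitting the flag entirely (default=None) keeps the run on CPU.
parser.add_argument('-g', '--gpu', nargs='?', const=0, default=None, type=int)
args = parser.parse_args()
print('CPU only' if args.gpu is None else 'GPU %d' % args.gpu)
```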

graphific commented on August 18, 2024

I've started a wiki with advice as well; feel free to add to it: https://github.com/graphific/DeepDreamVideo/wiki

Acanterelle commented on August 18, 2024

Can 2_dreaming_time.py run with a boot2docker dreamer such as ryankennedy's? I've managed to get 1_movie2frames.sh working, but the output seems a little short: for a 6-second video I only get 50 frames.

Thanks

rosshamish commented on August 18, 2024

@Acanterelle I'm going to try to dockerize this tonight

graphific commented on August 18, 2024

@Acanterelle yes that would be short indeed: with a framerate of 25, you'd expect 6 x 25 = 150 frames
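
A quick sanity check is to compare the number of extracted files against duration times framerate; a tiny sketch, assuming the frames landed in movFrames:

```python
import os

fps, duration = 25, 6  # expected frame rate and clip length in seconds
found = len(os.listdir('movFrames'))  # assumes only frames live here
print('expected %d frames, found %d' % (fps * duration, found))
```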

molexx commented on August 18, 2024

Could people post some example timings, please, of how long 2_dreaming_time.py takes to run per frame?

Acanterelle commented on August 18, 2024

@graphific I know it's short! :) I wonder if the error is in my command-line entry or if there's an issue somewhere else.

graphific commented on August 18, 2024

I'll try a CPU-only run when I can find some time as well...

Acanterelle commented on August 18, 2024

@rosshamish Awesome!

Acanterelle commented on August 18, 2024

Hey, here's a thought... is it possible to demux the frames of a target video to use as a guide when rendering an animation? Or will providing --guide-image waving_flowers.mp4 cause the .sh to crash?

trotskylenin commented on August 18, 2024

The h264 codec used in most mp4 videos is not intraframe, which means you cannot simply "demux" or unwrap the frames: only some of them really exist (the I-frames), and the rest are just bidirectional motion-tracking information pointing to those I-frames (search "h264 gop" for more details). This means you need an intermediate transcoding step to recreate those frames. So you need ffmpeg / avconv / MPlayer again, but with the guide video as input. Only this way is it possible to do what you want.
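
In practice that just means running the same extraction step on the guide video; a hedged sketch shelling out to ffmpeg (the file and directory names are made up):

```python
import os
import subprocess

guide_video, guide_dir = 'waving_flowers.mp4', 'guideFrames'  # assumed names
if not os.path.exists(guide_dir):
    os.makedirs(guide_dir)
# Decode every frame of the guide video to numbered JPEGs,
# analogous to what 1_movie2frames.sh does for the input movie.
subprocess.check_call(['ffmpeg', '-i', guide_video,
                       os.path.join(guide_dir, '%08d.jpg')])
```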

Acanterelle commented on August 18, 2024

@trotskylenin You lost a little in translation there... The particularly important question was "can this handle using a target video rather than a single still image?" 1_movie2frames.sh can already handle breaking up a video into a frame sequence with ffmpeg or avconv.

Whichever way we arrive at the frames, is it possible, as is, to input an image sequence as the target rather than inputting one target image?

graphific commented on August 18, 2024

Well, you could of course rip the frames from the guide video and feed those as guides to the dream frames, frame by frame. The duration / number of frames would have to be synced, however.
We could imagine adding a --guide_directory flag, but then what should the behaviour be when there are fewer guide frames than original frames?
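
One simple behaviour would be nearest-neighbour index mapping, stretching or compressing the guide sequence so both sequences start and end together; a sketch for a hypothetical --guide_directory implementation:

```python
def guide_index(i, n_frames, n_guides):
    # Map input frame i (0-based) onto the guide sequence; works whether
    # the guide has fewer or more frames than the input.
    return min(n_guides - 1, i * n_guides // n_frames)

# e.g. 150 dream frames guided by only 50 guide frames:
assert guide_index(0, 150, 50) == 0
assert guide_index(75, 150, 50) == 25
assert guide_index(149, 150, 50) == 49
```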

Acanterelle commented on August 18, 2024

--guide_directory flag is exactly it!

I think dictating the behaviour will be easier from a graphic-design perspective than from a computer-science perspective. The time-stretch functionality in AE springs to mind. Ideally this would be done from a visual timeline: being able to preview and scrub through the two image sequences in tandem, prior to applying the dream, would greatly ease the decision-making process.

Of course, why not just handle that as part of a pipeline? Blender can.

One might envision hacking Blender's GUI down to the bare bones, stripping out essentially everything but the timeline and image-sequence editor, and then using the "add image strips" + "preview panel" to line up the image sequences before sending a call to the AI. That could save a lot of heavy lifting, and it would make creative decisions a matter of visual judgement rather than fire-and-forget from the command line.

It might even be conceivable to do something like a PyOpenGL playblast... which would save a lot of design time if you don't like the basic look of the dream.

trotskylenin commented on August 18, 2024

Yes, of course. It would be great to build a GUI for this... but, well, the project is still miles away from that.
