
keras-flappybird's Introduction

Keras-FlappyBird

A single Python script of about 200 lines demonstrating DQN with Keras.

Please read the following blog post for details:

https://yanpanlau.github.io/2016/07/10/FlappyBird-Keras.html

Installation Dependencies:

  • Python 2.7
  • Keras 1.0
  • pygame
  • scikit-image

How to Run?

CPU only

git clone https://github.com/yanpanlau/Keras-FlappyBird.git
cd Keras-FlappyBird
python qlearn.py -m "Run"

GPU version (Theano)

git clone https://github.com/yanpanlau/Keras-FlappyBird.git
cd Keras-FlappyBird
THEANO_FLAGS=device=gpu,floatX=float32,lib.cnmem=0.2 python qlearn.py -m "Run"

If you want to train the network from the beginning, delete model.h5 and run qlearn.py -m "Train".

keras-flappybird's People

Contributors

kgullion, yanpanlau


keras-flappybird's Issues

TypeError: <lambda>() got an unexpected keyword argument 'dim_ordering'

This is my first time running this code, and I got the error below.
Keras 1.2.2, using the Theano backend.

I do not know how to solve this problem.

libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
Using Theano backend.
Now we build the model
Traceback (most recent call last):
  File "qlearn.py", line 198, in <module>
    main()
  File "qlearn.py", line 195, in main
    playGame(args)
  File "qlearn.py", line 188, in playGame
    model = buildmodel()
  File "qlearn.py", line 44, in buildmodel
    model.add(Convolution2D(32, 8, 8, subsample=(4,4),init=lambda shape, name: normal(shape, scale=0.01, name=name), border_mode='same',input_shape=(img_channels,img_rows,img_cols)))
  File "/home/kaido/anaconda2/lib/python2.7/site-packages/keras/models.py", line 299, in add
    layer.create_input_layer(batch_input_shape, input_dtype)
  File "/home/kaido/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 401, in create_input_layer
    self(x)
  File "/home/kaido/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 546, in __call__
    self.build(input_shapes[0])
  File "/home/kaido/anaconda2/lib/python2.7/site-packages/keras/layers/convolutional.py", line 436, in build
    constraint=self.W_constraint)
  File "/home/kaido/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 418, in add_weight
    weight = initializer(shape, name=name)
TypeError: <lambda>() got an unexpected keyword argument 'dim_ordering'
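
This error appears to be Keras 1.2.x wrapping custom Convolution2D initializers so that they get called with an extra dim_ordering keyword. A minimal sketch of a possible workaround (an assumption, not verified against this exact setup) is to let the custom initializer accept and ignore extra keyword arguments:

# Possible workaround sketch, assuming Keras 1.2.x passes dim_ordering to
# custom conv initializers: let the lambda swallow unexpected keyword arguments.
from keras.initializations import normal

custom_init = lambda shape, name=None, **kwargs: normal(shape, scale=0.01, name=name)

# In buildmodel(), pass init=custom_init instead of the original two-argument lambda.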

TabError: inconsistent use of tabs and spaces in indentation

Hello, I get the following error when trying to run your code:

C:\Users\J\Keras-FlappyBird>python qlearn.py -m "Run"
Traceback (most recent call last):
  File "qlearn.py", line 11, in <module>
    import wrapped_flappy_bird as game
  File "game\wrapped_flappy_bird.py", line 144
    FPSCLOCK.tick(FPS)
TabError: inconsistent use of tabs and spaces in indentation

Have you ever seen this before? Thank you.
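
The game file mixes tabs and spaces in its indentation, which Python 3 refuses to run. A minimal sketch to locate the offending lines (the path comes from the traceback; Python's built-in tabnanny module can do the same check):

# Sketch: report lines in the game module whose indentation mixes tabs and
# spaces, so they can be converted consistently to spaces.
path = "game/wrapped_flappy_bird.py"
with open(path) as f:
    for lineno, line in enumerate(f, 1):
        indent = line[:len(line) - len(line.lstrip())]
        if "\t" in indent and " " in indent:
            print("mixed indentation at line", lineno)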

Something wrong with import wrapped_flappy_bird as game

Traceback (most recent call last):
  File "qlearn.py", line 11, in <module>
    import wrapped_flappy_bird as game
  File "game/wrapped_flappy_bird.py", line 19, in <module>
    IMAGES, SOUNDS, HITMASKS = flappy_bird_utils.load()
  File "game/flappy_bird_utils.py", line 21, in load
    pygame.image.load('assets/sprites/0.png').convert_alpha(),
pygame.error: File is not a Windows BMP file

File is not a Windows BMP file

~/Keras-FlappyBird$ python qlearn.py -m "Run"
Recommended matplotlib backend is `Agg` for full skimage.viewer functionality.
2016-07-25 09:28:56.396 Python[32404:8035262] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/h6/5yk5fw6x58l_s99ypvht_r8w0000gn/T/org.python.python.savedState
Traceback (most recent call last):
  File "qlearn.py", line 11, in <module>
    import wrapped_flappy_bird as game
  File "game/wrapped_flappy_bird.py", line 19, in <module>
    IMAGES, SOUNDS, HITMASKS = flappy_bird_utils.load()
  File "game/flappy_bird_utils.py", line 21, in load
    pygame.image.load('assets/sprites/0.png').convert_alpha(),
pygame.error: File is not a Windows BMP file
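
Both reports above show pygame failing to load a PNG sprite and falling back to its plain BMP loader, which typically means the pygame build lacks extended image (SDL_image) support. A minimal diagnostic sketch, assuming a standard pygame install:

# Sketch: pygame.image.get_extended() reports whether this pygame build can
# load formats beyond BMP (such as the PNG sprites in assets/sprites/).
# A falsy result suggests reinstalling pygame with SDL_image support.
import pygame
pygame.init()
print(pygame.image.get_extended())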

Result tends to be 1 after 1 million training steps

Though trained weights are available in this repository, I still want to train the model from scratch myself.
But after 1 million training steps, all the output is 1, and I didn't modify any of the source code.
So I am confused. What am I doing wrong? Are there any tips I should follow?

Thanks for answering my questions!

TypeError: __init__() got an unexpected keyword argument 'input_shape'

When I run the code with python qlearn.py -m "Run", the terminal shows the error in the title.
The error occurs at model.add(Convolution2D(32, 8, 8, subsample=(4,4),init=lambda shape, name: normal(shape, scale=0.01, name=name), border_mode='same',input_shape=(img_channels,img_rows,img_cols)))
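
This TypeError is what the Keras 1-style Convolution2D call produces under Keras 2, where the layer API changed. A hedged sketch of the equivalent first layer written against the Keras 2 API (the shape constants are the ones used in qlearn.py; treating this as the intended equivalent is an assumption):

# Sketch of the first convolution layer rewritten for the Keras 2 API; this is
# an assumed equivalent of the original Keras 1 call, not code from the repo.
from keras.models import Sequential
from keras.layers import Conv2D
from keras.initializers import RandomNormal

img_rows, img_cols, img_channels = 80, 80, 4  # values used in qlearn.py

model = Sequential()
model.add(Conv2D(32, (8, 8), strides=(4, 4), padding='same',
                 kernel_initializer=RandomNormal(stddev=0.01),
                 data_format='channels_first',
                 input_shape=(img_channels, img_rows, img_cols)))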

Pre-trained model does not work

I'm using TensorFlow 1.7 and Keras 2.1.6. Doing a git clone followed by python qlearn.py -m "Run" should run the pre-trained model, right? However, the result does not look good: Flappy Bird immediately crashes at the first pipe.

How to train the code on an ubuntu server?

Hey, I want to train it on an Ubuntu server, but first I hit this problem:
pygame.display.init() error: "No available video device"
I then found an answer here: http://stackoverflow.com/questions/10220104/pygame-error-video-system-not-initialized-on-ubuntu-server-with-only-terminal
I tried os.environ["SDL_VIDEODRIVER"] = "dummy", but then I got a new problem:
File "game/wrapped_flappy_bird.py", line 135, in frame_step
  SCREEN.blit(IMAGES['pipe'][0], (uPipe['x'], uPipe['y']))
pygame.error: Blit combination not supported

Is there an effective way to train the code on an Ubuntu server?
Thank you so much!
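
One detail that matters here is ordering: qlearn.py imports wrapped_flappy_bird at module load, and that import initializes the pygame display, so SDL_VIDEODRIVER must be set before the import happens. A minimal sketch of the intended ordering (running under a virtual framebuffer such as xvfb-run instead of the dummy driver is another common approach and may avoid the blit error; that is an assumption, not verified here):

# Sketch: select the dummy SDL video driver before anything initializes
# pygame's display, i.e. before wrapped_flappy_bird is imported.
import os
os.environ["SDL_VIDEODRIVER"] = "dummy"

import sys
sys.path.append("game/")            # mirrors the path setup in qlearn.py
import wrapped_flappy_bird as game  # import the game only after the driver is set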

buildmodel fails on Convolution2D filter_size vs input_size?

buildmodel fails because the filter size is larger than the input size.
I don't understand why input_size is reported as (4, 80) when (4, 80, 80) is clearly passed as input_shape.

I'm using Keras version 1.1.0.

Any ideas?

Using TensorFlow backend.
Now we build the model
(4, 80, 80)
Traceback (most recent call last):
  File "qlearn.py", line 199, in <module>
    main()
  File "qlearn.py", line 196, in main
    playGame(args)
  File "qlearn.py", line 189, in playGame
    model = buildmodel()
  File "qlearn.py", line 45, in buildmodel
    model.add(Convolution2D(32, 8, 8, subsample=(4,4),init=lambda shape, name: normal(shape, scale=0.01, name=name), border_mode='same',input_shape=(img_channels,img_rows,img_cols)))
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/keras/models.py", line 276, in add
    layer.create_input_layer(batch_input_shape, input_dtype)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/keras/engine/topology.py", line 370, in create_input_layer
    self(x)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/keras/engine/topology.py", line 514, in __call__
    self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/keras/engine/topology.py", line 572, in add_inbound_node
    Node.create_node(self, inbound_layers, node_indices, tensor_indices)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/keras/engine/topology.py", line 149, in create_node
    output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/keras/layers/convolutional.py", line 466, in call
    filter_shape=self.W_shape)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1579, in conv2d
    x = tf.nn.conv2d(x, kernel, strides, padding=padding)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 394, in conv2d
    data_format=data_format, name=name)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op
    op_def=op_def)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2319, in create_op
    set_shapes_for_outputs(ret)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1711, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 246, in conv2d_shape
    padding)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 184, in get2d_conv_output_size
    (row_stride, col_stride), padding_type)
  File "/Users/noun/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 149, in get_conv_output_size
    "Filter: %r Input: %r" % (filter_size, input_size))
ValueError: Filter must not be larger than the input: Filter: (8, 8) Input: (4, 80)
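
The reported input of (4, 80) is what you get when the TensorFlow backend reads the (4, 80, 80) tensor in channels-last order, so the 8x8 filter is compared against a 4x80 "image". A hedged sketch of one way to force channels-first ordering under Keras 1.x before building the model (an assumption about the intended configuration):

# Sketch, assuming Keras 1.x: force Theano-style channels-first image ordering
# so input_shape=(4, 80, 80) is read as 4 channels of 80x80 frames.
from keras import backend as K
K.set_image_dim_ordering('th')
# buildmodel() can then be called as before. Alternatively, set
# "image_dim_ordering": "th" in ~/.keras/keras.json.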

Initial bias towards action=1?

Why does the network start with such a strong bias towards trying action 1 at every timestep?

I only occasionally see action 0.

It looks like it would be difficult to break out of this pattern, since the agent receives a reward of 0.1 for that action before it encounters the first pipe gap.

CorrMM images and kernel must have the same stack size

Hi there! Thanks for the great blog post. I ran your code with Theano as the backend and I'm getting the stack trace below. Any idea? It says it loaded the model, but then fails because the images and kernel need to have the same stack size.

Also, are you running it with Theano or TensorFlow as the backend?

Thanks!

Marion

python qlearn.py -m "Run"
Recommended matplotlib backend is `Agg` for full skimage.viewer functionality.
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
Using Theano backend.
Now we build the model
We finish building the model
Now we load weight
Weight load successfully
Traceback (most recent call last):
  File "qlearn.py", line 198, in <module>
    main()
  File "qlearn.py", line 195, in main
    playGame(args)
  File "qlearn.py", line 189, in playGame
    trainNetwork(model,args)
  File "qlearn.py", line 107, in trainNetwork
    q = model.predict(s_t)       #input a stack of 4 images, get the prediction
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/models.py", line 671, in predict
    return self.model.predict(x, batch_size=batch_size, verbose=verbose)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/engine/training.py", line 1179, in predict
    batch_size=batch_size, verbose=verbose)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/engine/training.py", line 878, in _predict_loop
    batch_outs = f(ins_batch)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/backend/theano_backend.py", line 717, in __call__
    return self.function(*inputs)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/theano/compile/function_module.py", line 871, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/theano/gof/link.py", line 314, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/theano/compile/function_module.py", line 859, in __call__
    outputs = self.fn()
ValueError: CorrMM images and kernel must have the same stack size

Apply node that caused the error: CorrMM{half, (4, 4)}(InplaceDimShuffle{0,3,1,2}.0, Subtensor{::, ::, ::int64, ::int64}.0)
Toposort index: 22
Inputs types: [TensorType(float32, 4D), TensorType(float32, 4D)]
Inputs shapes: [(1, 80, 4, 80), (8, 8, 32, 4)]
Inputs strides: [(320, 4, 25600, 320), (4, 32, -1024, -256)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Subtensor{int64:int64:int8, int64:int64:int8, int64:int64:int8, int64:int64:int8}(CorrMM{half, (4, 4)}.0, ScalarFromTensor.0, ScalarFromTensor.0, Constant{1}, Constant{0}, Constant{32}, Constant{1}, ScalarFromTensor.0, ScalarFromTensor.0, Constant{1}, ScalarFromTensor.0, ScalarFromTensor.0, Constant{1})]]

Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
  File "qlearn.py", line 44, in buildmodel
    model.add(Convolution2D(32, 8, 8, subsample=(4,4),init=lambda shape, name: normal(shape, scale=0.01, name=name), border_mode='same',input_shape=(img_channels,img_rows,img_cols)))
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/models.py", line 276, in add
    layer.create_input_layer(batch_input_shape, input_dtype)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/engine/topology.py", line 370, in create_input_layer
    self(x)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/engine/topology.py", line 514, in __call__
    self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/engine/topology.py", line 572, in add_inbound_node
    Node.create_node(self, inbound_layers, node_indices, tensor_indices)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/engine/topology.py", line 149, in create_node
    output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/layers/convolutional.py", line 466, in call
    filter_shape=self.W_shape)
  File "/Users/mleborgne/Library/Python/2.7/lib/python/site-packages/keras/backend/theano_backend.py", line 1135, in conv2d
    filter_shape=filter_shape)
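
The input shapes (1, 80, 4, 80) versus (8, 8, 32, 4) look like the same channel-ordering mismatch as in the filter-size issue above, this time with the Theano backend configured for TensorFlow-style ordering. A small diagnostic sketch (treating 'th' as the correct setting for this code's channels-first input shape is an assumption):

# Diagnostic sketch: print the backend and image ordering Keras is using;
# for an input_shape of (img_channels, img_rows, img_cols), 'th' is expected.
from keras import backend as K
print(K.backend())             # 'theano' or 'tensorflow'
print(K.image_dim_ordering())  # 'th' for channels-first, 'tf' for channels-last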

Getting an error on running

I am running this on CPU only, and I get the following error on running:

Traceback (most recent call last):
  File "qlearn.py", line 5, in <module>
    import skimage as skimage
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\__init__.py", line 167, in <module>
    from .util.dtype import (img_as_float32,
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\util\__init__.py", line 8, in <module>
    from .arraycrop import crop
  File "C:\Users\Win 10\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\util\arraycrop.py", line 8, in <module>
    from numpy.lib.arraypad import _validate_lengths
ImportError: cannot import name '_validate_lengths'
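
_validate_lengths was a private numpy helper that newer numpy releases removed, so a scikit-image build that still imports it fails at import time; upgrading scikit-image (or using an older numpy) is the usual remedy. A minimal diagnostic sketch:

# Diagnostic sketch: check whether the private numpy helper this scikit-image
# build expects is still present in the installed numpy.
import numpy
print("numpy", numpy.__version__)
try:
    from numpy.lib.arraypad import _validate_lengths  # removed in newer numpy
    print("_validate_lengths is present")
except ImportError:
    print("_validate_lengths is missing; upgrade scikit-image or use an older numpy")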
