
keras-complex's Introduction

Complex-Valued Neural Networks in Keras with Tensorflow


Complex-valued convolutions could provide some interesting results in signal-processing-based deep learning. A simple(-ish) idea is to include explicit phase information of time series in neural networks. This code enables complex-valued convolution in convolutional neural networks in Keras with the TensorFlow backend. This makes the network modular and interoperable with standard Keras layers and operations.

This code is very much in alpha. Please consider helping to improve the code so we can advance together. This repository is based on the code that reproduces the experiments presented in the paper Deep Complex Networks; it is a port to Keras with the TensorFlow backend.

Requirements

  • numpy
  • scipy
  • scikit-learn
  • tensorflow 2.X

Install requirements for computer vision experiments with pip:

pip install -r requirements.txt

Depending on your Python installation, you might want to use Anaconda, venv, or other environment tools.

Installation

pip install keras-complex

Usage

Build your neural networks with the help of Keras.

import complexnn

import keras
from keras import models
from keras import layers
from keras import optimizers

model = models.Sequential()

# The last axis of input_shape is 2: the real and imaginary parts are stored
# as separate channels (see "Complex Format of Tensors" below).
model.add(complexnn.conv.ComplexConv2D(32, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 2)))
model.add(complexnn.bn.ComplexBatchNormalization())
model.add(layers.MaxPooling2D((2, 2), padding='same'))

model.compile(optimizer=optimizers.Adam(), loss='mse')

A working example implementation of an autoencoder can be found here.

Complex Format of Tensors

This library assumes that complex values are split into two real-valued parts: the real and the imaginary component, stored as separate channels. This is also described in the Docs.

For a 2D complex tensor of shape 3x3, the tensor looks like:

[[[r, r, r],
  [r, r, r],
  [r, r, r]],
 [[i, i, i],
  [i, i, i],
  [i, i, i]]]

So multiple samples should then be arranged into [r,r,r,i,i,i], which is also documented in the Docs.
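As a hedged illustration (assuming channels_last data and a single complex channel), complex-valued NumPy data could be rearranged into this split format like so:

import numpy as np

# Hypothetical complex-valued batch: 16 samples of 28x28 single-channel images
x_complex = np.random.randn(16, 28, 28, 1) + 1j * np.random.randn(16, 28, 28, 1)

# Real parts first, then imaginary parts, concatenated along the channel axis,
# giving shape (16, 28, 28, 2) as used by the ComplexConv2D usage example above.
x_split = np.concatenate([x_complex.real, x_complex.imag], axis=-1)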

Citation

Find the CITATION file or cite this software version as:

@misc{dramsch2019complex, 
    title     = {Complex-Valued Neural Networks in Keras with Tensorflow}, 
    url       = {https://figshare.com/articles/Complex-Valued_Neural_Networks_in_Keras_with_Tensorflow/9783773/1}, 
    DOI       = {10.6084/m9.figshare.9783773}, 
    publisher = {figshare}, 
    author    = {Dramsch, Jesper S{\"o}ren and Contributors}, 
    year      = {2019}
}

Please cite the original work as:

@ARTICLE {Trabelsi2017,
    author  = "Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, João Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, Christopher J Pal",
    title   = "Deep Complex Networks",
    journal = "arXiv preprint arXiv:1705.09792",
    year    = "2017"
}

keras-complex's People

Contributors

blubbaa, casabre, chihebtrabelsi, dmitriy-serdyuk, edowson, gauss256, jesperdramsch, obilaniu, oisinmoran


keras-complex's Issues

Interpretation of complex_conv2d layer's output

First of all, thank you for the brilliant repo.

I have a question regarding the interpretation of the output of the complex_conv2d layer (cc2l). A cc2l with 64 kernels has an output of shape (None, None, None, 128), meaning we have 64 real and 64 imaginary matrices. I want to know which arrangement of r and i is correct: are the first 64 matrices real and the rest imaginary, or are the odd-numbered ones (1, 3, 5, ...) real and the even-numbered ones (2, 4, 6, ...) imaginary?
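For reference, a minimal sketch of slicing such an output, under the assumption (suggested by the README's [r, r, r, i, i, i] convention) that the real feature maps come first and the imaginary ones second along the channel axis:

import numpy as np

# Stand-in for the output of a ComplexConv2D layer with 64 filters: (batch, H, W, 128)
y = np.zeros((1, 28, 28, 128))
n_filters = 64
real_part = y[..., :n_filters]   # first 64 channels, assumed to be the real parts
imag_part = y[..., n_filters:]   # last 64 channels, assumed to be the imaginary parts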

Basic Usage

I'm wondering what the assumed input dimensions are, if the input data is complex. For example, if I want to perform a complex 1D convolution on complex data, would the assumed data input be

[batch_index, 2, N] or [batch_index, N, 2] or [batch_index, 2*N] where N is the number of complex numbers?

As far as I can tell, your examples demonstrate leveraging complex operations on intrinsically real data, rather than intrinsically complex data, so I was hoping for some clarification.
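A hedged sketch of one plausible arrangement, following the README's split-channel convention with channels_last (i.e. [batch_index, N, 2], real part first):

import numpy as np

# Hypothetical batch of 8 complex signals of length 1024
signals = np.random.randn(8, 1024) + 1j * np.random.randn(8, 1024)

# Stack real and imaginary parts on a trailing channel axis -> shape (8, 1024, 2)
x = np.stack([signals.real, signals.imag], axis=-1)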

Error in ComplexBatchNormalization

Hello,

I keep running into this error with ComplexBatchNormalization when training a simple model.

Traceback (most recent call last):
  File "/data/phlav/complex-env/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/data/phlav/complex-env/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/data/phlav/complex-env/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable complex_batch_normalization_2/Variable_3 from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/complex_batch_normalization_2/Variable_3)
         [[{{node complex_batch_normalization_2/ReadVariableOp_3}}]]

I even reproduced it while running src/mass_train.py in https://github.com/JesperDramsch/Complex-CNN-Seismic

Any idea what could be the cause?

I am using python=3.6, tensorflow-gpu=1.15.2, keras=2.3.1.

ValueError: Operands could not be broadcast together with shapes (32, 16, 44) (32, 16, 22)

I'm trying to run the cifar10 examples with tensorflow-gpu==1.13.1 and keras==2.2.4.

I get the following error when trying to train using the complex model:

python scripts/run.py train --datadir $FUEL_DATA_PATH/cifar10 --dataset cifar10 --workdir $OUTPUT_DIR --model complex

ValueError: Operands could not be broadcast together with shapes (32, 16, 44) (32, 16, 22)

Would you happen to know how to solve this?

The training script works when using the real model.

Issues with Complex Dense Layers

Hello,

I was trying to use the complexnn.dense.ComplexDense() layer to build an ANN with complex numbers. The input data is 432 arrays of 60 complex values each, with shape (432, 120) after concatenating the real and imaginary parts. My network structure is as follows:
[screenshot: model summary]

Then I compiled and fit the network, and it gives me an error like this:
[screenshot: error traceback]

I would like to ask whether the problem comes from a wrong input_dim, a wrong batch_size assignment, or a bad output neuron setting?

Also, are there any available examples of the ComplexDense() layer in use? I only saw a ComplexConv2D() example in the README.

Much appreciated!
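For what it's worth, a minimal hedged sketch of a ComplexDense stack; the data here is random and merely stands in for the (432, 120) real/imaginary concatenation described above, and the assumption that each complex unit yields two real outputs is an interpretation, not confirmed documentation:

import numpy as np
import complexnn
from keras import models, optimizers

# Stand-in for 432 samples of 60 complex values, stored as [60 real, 60 imaginary]
x = np.random.randn(432, 120)
y = np.random.randn(432, 2)  # e.g. one complex target, split into real and imaginary

model = models.Sequential()
model.add(complexnn.dense.ComplexDense(16, activation='relu', input_shape=(120,)))
model.add(complexnn.dense.ComplexDense(1))  # assumed to emit 2 real outputs per complex unit
model.compile(optimizer=optimizers.Adam(), loss='mse')
model.fit(x, y, epochs=2, batch_size=32)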

ImportError when importing complexnn

Hi,

I have installed keras-complex using command "pip install keras-complex".

But when I try to import complexnn, the following ImportError was reported:
from keras.layers.convolutional import _Conv
ImportError: cannot import name '_Conv'

I had a quick search online and there seems to be some compatibility issue between Keras and TensorFlow, maybe?
But I am not sure how to fix it. Is there a specific Keras + TensorFlow version combo we should be using?

Thanks,

Buffy

Hint for format of complex data array in README

I am wondering how the data should be prepared in order to work with complexnn. I realized that complex numpy arrays are not working, e.g. [1+1j, 2+1j*2, 3+1j*3].

Should a three-element vector be formatted like IIIQQQ in order to work with a dense layer?

Thanks & best regards

pool and activation

Hello,
I have several questions. First, the package does not seem to contain a complex max pooling layer. Can the real-valued max pooling kernel be used in the same way? Second, I can't find CReLU and modReLU; are they included in this work?

Problem with ComplexBN

Whenever I try to use ComplexBN, it shows the following error:

File "F:\HR\adaptive_nn\single_mode_c_ind.py", line 364, in <module>
    H40 = ComplexBN()(H4)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\keras\backend\tensorflow_backend.py", line 75, in symbolic_fn_wrapper
    return func(*args, **kwargs)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\keras\engine\base_layer.py", line 463, in __call__
    self.build(unpack_singleton(input_shapes))
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\complexnn\bn.py", line 361, in build
    initializer=self.gamma_diag_initializer,
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\keras\engine\base_layer.py", line 282, in add_weight
    constraint=constraint)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\keras\backend\tensorflow_backend.py", line 620, in variable
    value, dtype=dtype, name=name, constraint=constraint)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow_core\python\keras\backend.py", line 814, in variable
    constraint=constraint)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow_core\python\ops\variables.py", line 260, in __call__
    return cls._variable_v2_call(*args, **kwargs)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow_core\python\ops\variables.py", line 254, in _variable_v2_call
    shape=shape)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow_core\python\ops\variables.py", line 235, in <lambda>
    previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow_core\python\ops\variable_scope.py", line 2645, in default_variable_creator_v2
    shape=shape)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow_core\python\ops\variables.py", line 262, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py", line 1411, in __init__
    distribute_strategy=distribute_strategy)
File "C:\Users\USER\anaconda3\envs\tensorflow_env\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py", line 1494, in _init_from_args
    raise ValueError("Tensor-typed variable initializers must either be "

ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable (e.g., tf.Variable(lambda : tf.truncated_normal([10, 40]))) when building functions. Please file a feature request if this restriction inconveniences you.

I am not sure why this is happening. Is it because of a version mismatch? My TensorFlow version is 2.1.0 and my Keras version is 2.3.1. How do I get around this issue?

Can't see accuracy of model

Dear everyone,
Nice to meet you.

I'm training my custom model, but I can't see the accuracy of the model; I can only see the loss value in the terminal.
I'm attaching the training log below 👍
Thanks

Epoch 1/20
2023-06-30 10:32:50.974378: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:424] Loaded cuDNN version 8900
1079/1079 - 21s - loss: 0.2431 - 21s/epoch - 19ms/step
Epoch 2/20
1079/1079 - 10s - loss: 0.2135 - 10s/epoch - 10ms/step
Epoch 3/20
1079/1079 - 10s - loss: 0.2003 - 10s/epoch - 10ms/step
Epoch 4/20
1079/1079 - 10s - loss: 0.2163 - 10s/epoch - 10ms/step
Epoch 5/20
1079/1079 - 11s - loss: 0.2038 - 11s/epoch - 10ms/step
Epoch 6/20
1079/1079 - 11s - loss: 0.2039 - 11s/epoch - 10ms/step
Epoch 7/20
1079/1079 - 11s - loss: 0.1953 - 11s/epoch - 10ms/step
Epoch 8/20
1079/1079 - 11s - loss: 0.1911 - 11s/epoch - 10ms/step
Epoch 9/20
1079/1079 - 11s - loss: 0.1925 - 11s/epoch - 10ms/step
Epoch 10/20
1079/1079 - 11s - loss: 0.1881 - 11s/epoch - 10ms/step
Epoch 11/20
1079/1079 - 11s - loss: 0.1841 - 11s/epoch - 10ms/step
Epoch 12/20
1079/1079 - 11s - loss: 0.1810 - 11s/epoch - 10ms/step
Epoch 13/20
1079/1079 - 11s - loss: 0.1782 - 11s/epoch - 10ms/step
Epoch 14/20
1079/1079 - 11s - loss: 0.1802 - 11s/epoch - 10ms/step
Epoch 15/20
1079/1079 - 11s - loss: 0.1821 - 11s/epoch - 10ms/step
Epoch 16/20
1079/1079 - 11s - loss: 0.1784 - 11s/epoch - 10ms/step
Epoch 17/20
1079/1079 - 11s - loss: 0.1811 - 11s/epoch - 10ms/step
Epoch 18/20
1079/1079 - 11s - loss: 0.1767 - 11s/epoch - 10ms/step
Epoch 19/20
1079/1079 - 11s - loss: 0.1742 - 11s/epoch - 10ms/step
Epoch 20/20
1079/1079 - 11s - loss: 0.1748 - 11s/epoch - 10ms/step
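As general context (standard Keras behaviour, not specific to complexnn): a metric is only reported during training if it is requested at compile time. A minimal stand-in example:

from keras import models, layers

# Toy placeholder model; the relevant part is the metrics argument to compile()
model = models.Sequential([layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])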

Incorrect Shape of Kernel?

Hello,

I'm running into errors regarding the shape of the kernel for the complex convolution. In the ComplexConv class you have:

[screenshot: kernel_shape definition in the ComplexConv class]

Say the number of filters is 8, each with a 2x2 shape, and the input size is 10x10x2. Then self.kernel_shape will be 2x2x1x8. However, the initializer is generating something of shape 2x2x1x16 (real and imaginary). Shouldn't the kernel shape also be 2x2x1x16 for the purposes of passing into self.add_weight?

Misleading instructions for ComplexConv2D input shape?

Hi there,

First off I would just like to say thank you so much for this repo; it's been amazingly helpful!

That being said, I've run into a little snag when specifying the input shape for a ComplexConv2D layer.

In the source code for this layer, the help section states the following (https://github.com/JesperDramsch/keras-complex/blob/master/complexnn/conv.py#L729-L733):

"When using this layer as the first layer in a model,
provide the keyword argument input_shape
(tuple of integers, does not include the sample axis),
e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures
in data_format="channels_last"."

Here I would naturally assume 128 = width, 128 = height, 3 = number of channels. Yet when I attempt to follow these instructions by executing the following code:

model = models.Sequential()

model.add(complexnn.conv.ComplexConv2D(32, (3, 3), activation='relu', padding='same', input_shape=(128, 128, 3)))

I am then met with the following error:

ValueError: Input 0 is incompatible with layer complex_conv2d_11: expected axis -1 of input shape to have value 2 but got shape (None, 128, 128, 3)

The layer consistently insists that the final axis be of value 2 and sure enough when I make input_shape=(128, 128, 2) it works just fine. Analogous versions of the same error/solution occur for ComplexConv1D and ComplexConv3D layers as well.

The insistence of the value 2 in the final axis makes me think it has something to do with splitting the input data up into real and imaginary parts into separate arrays of float dtypes. This is somewhat confusing to me though, as I was under the impression the input was meant to be specified as an array of (width, height, # of channels) where width and height were complex dtypes? If this isn't the case, and the data needs to be split into real and imaginary components, then how (if at all) is the number of channels specified in the ComplexConv2D layer?

In summary, I was just hoping for some clarification on the following:

  1. Should the input data be transformed into separate arrays of real and imaginary float dtypes? Or can it be kept as a single array of complex dtypes?

  2. How exactly should the input_shape be specified (particularly when it comes to the number of channels the data has) ?

My apologies if I'm missing something trivial or if this has been answered before!
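For clarity, a hedged sketch under the assumption that the layer expects the real channels followed by the imaginary channels on the last axis, so a 3-channel complex image would use 2 * 3 = 6 entries (this is an interpretation, not confirmed documentation):

import numpy as np
import complexnn
from keras import models

# Hypothetical batch of 4 complex-valued 3-channel 128x128 images
imgs = np.random.randn(4, 128, 128, 3) + 1j * np.random.randn(4, 128, 128, 3)

# Real channels first, then imaginary channels -> shape (4, 128, 128, 6)
x = np.concatenate([imgs.real, imgs.imag], axis=-1)

model = models.Sequential()
model.add(complexnn.conv.ComplexConv2D(32, (3, 3), padding='same', input_shape=(128, 128, 6)))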

Error instantiating ComplexConv2D

import complexnn

import keras
from keras import models
from keras import layers
from keras import optimizers

model = models.Sequential()

model.add(complexnn.conv.ComplexConv2D(32, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 3), data_format='channels_last'))
model.add(complexnn.bn.ComplexBatchNormalization())
model.add(layers.MaxPooling2D((2, 2), padding='same'))

model.compile(optimizer=optimizers.Adam(), loss='mse')

Using the above example, I am getting

ValueError: Input 0 is incompatible with layer complex_conv2d_8: expected axis -1 of input shape to have value 2 but got shape (None, 28, 28, 3)

Is this only supposed to work when the channel shape is 2? I see that https://github.com/JesperDramsch/keras-complex/blob/master/complexnn/conv.py#L732 has documentation for RGB images.

ValueError with complex_conv3d_1

I am now trying to use keras-complex in an autoencoder for an image compression and reconstruction project.
In my code:

def encoder(x):
    x = complexnn.conv.ComplexConv3D(filters=2, kernel_size=(3, 3, 2), padding='same', data_format='channels_first',
                                     input_shape=(1, 32, 32, 2))(x)

as the first layer of my model, where x is predefined with
image_tensor = Input(shape=(img_channels, img_height, img_width, img_RI))
The error report I got:
ValueError: Input 0 is incompatible with layer complex_conv3d_1: expected axis 1 of input shape to have value 0 but got shape (None, 1, 32, 32, 2)
