Facebook-Messenger-Bot

The FB Messenger chatbot that I trained to talk like me. See the associated blog post.

Overview

For this project, I wanted to train a Sequence To Sequence model on my past conversation logs from various social media sites. You can read more about the motivation behind this approach, the details of the ML model, and the purpose of each Python script in the blog post, but I want to use this README to explain how you can train your own chatbot to talk like you.

Requirements and Installation

In order to run these scripts, you'll need the following libraries.

How You Can Train Your Own

  1. Download and unzip this entire repository from GitHub, either interactively, or by entering the following in your Terminal.

    git clone https://github.com/adeshpande3/Facebook-Messenger-Bot.git
  2. Navigate into the top directory of the repo on your machine.

    cd Facebook-Messenger-Bot
  3. Our first job is to download all of your conversation data from various social media sites. I used Facebook, Google Hangouts, and LinkedIn. If you're getting data from other sites, that's fine; you'll just have to create a new method in createDataset.py.

  • Facebook Data: Download your data from here. Once downloaded, you should have a file called messages.htm. It'll be pretty large (over 190 MB for me). We're going to need to parse through this file and extract all of the conversations. To do this, we'll use this tool that Dillon Dixon has kindly open sourced. You'll go ahead and install that tool by running

    pip install fbchat-archive-parser

    and then running:

    fbcap ./messages.htm > fbMessages.txt

    This will give you all your Facebook conversations in a fairly unified text file. Thanks, Dillon! Go ahead and store that file in your Facebook-Messenger-Bot folder.

  • LinkedIn Data: Download your data from here. Once downloaded, you should see an inbox.csv file. We won't need to take any other steps here; we just want to copy it over to our folder.

  • Google Hangouts Data: Download your data from here. Once downloaded, you'll get a JSON file that we'll need to parse through. To do this, we'll use this parser found through this phenomenal blog post. We'll want to save the data into text files, and then copy the folder over to ours.

    At the end of all this, you should have a directory structure that looks like this. Make sure you rename the folders and file names if yours are different.

  • Discord Data: You can extract your Discord chat logs by using this awesome DiscordChatExporter made by Tyrrrz. Follow its documentation to extract your desired singular chat logs in .txt format (this is important). You can then put them all in a folder named DiscordChatLogs in the repo directory.

  • WhatsApp Data: Make sure your phone is set to the US date format if it isn't already (this will be important later when you parse the log file to .csv). You cannot use WhatsApp Web for this purpose. Open the chat you want to export, tap the menu button, tap More, then tap "Email Chat". Send the email to yourself and download it to your computer. This will give you a .txt file; to parse it, we'll convert it to .csv. To do this, go to this link and enter all the text in your log file. Click export, download the .csv file, and store it in your Facebook-Messenger-Bot folder under the name "whatsapp_chats.csv".

    NOTE: The parser provided in the above link seems to have been removed. If you still have a .csv file in the correct format, you can use that. Otherwise, download your WhatsApp chat logs as .txt files and put them all in a folder named WhatsAppChatLogs in the repo directory. createDataset.py will work with these files instead if, and only if, it DOES NOT find a .csv file named whatsapp_chats.csv.

    In case you use .txt chat logs, note that the expected format is one of the following (a small parsing sketch follows the examples):

    [20.06.19, 15:58:57] Loris: Welcome to the chat example
    [20.06.19, 15:59:07] John: Thanks
    

    (OR)

    12/28/19, 21:43 - Loris: Welcome to the chat example
    12/28/19, 21:43 - John: Thanks
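
    For reference, here's a minimal sketch of how a line in the second format could be split into (sender, text) with a regular expression. This is a hypothetical illustration only, not the repo's actual createDataset.py logic:

      import re

      # Matches lines like "12/28/19, 21:43 - Loris: Welcome to the chat example".
      # Hypothetical illustration; the real parsing lives in createDataset.py.
      LINE_PATTERN = re.compile(r'^(\d{1,2}/\d{1,2}/\d{2}), (\d{1,2}:\d{2}) - ([^:]+): (.*)$')

      def parse_whatsapp_line(line):
          match = LINE_PATTERN.match(line.strip())
          if match is None:
              return None  # continuation of a previous message, or a system line
          date, time, sender, text = match.groups()
          return sender, text

      print(parse_whatsapp_line('12/28/19, 21:43 - Loris: Welcome to the chat example'))
      # -> ('Loris', 'Welcome to the chat example')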
    
  4. Now that we have all our conversation logs in a clean format, we can go ahead and create our dataset. In our directory, let's run:

    python createDataset.py

    You'll then be prompted to enter your name (so that the script knows who to look for) and which social media sites you have data for. This script will create a file named conversationDictionary.npy, which is a NumPy object that contains pairs in the form of (FRIENDS_MESSAGE, YOUR RESPONSE). A file named conversationData.txt will also be created; this is simply a large text file containing the dictionary data in a unified form.
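
    To sanity-check the output, you can load the dictionary back with NumPy. A quick sketch, assuming the dictionary was saved with np.save (allow_pickle=True is required on newer NumPy versions because the file stores a pickled Python dict):

      import numpy as np

      # .item() unwraps the 0-d object array back into the Python dict.
      conversation_dict = np.load('conversationDictionary.npy', allow_pickle=True).item()

      print('Number of pairs:', len(conversation_dict))
      for friend_message, my_response in list(conversation_dict.items())[:5]:
          print(repr(friend_message), '->', repr(my_response))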

  5. Now that we have those 2 files, we can start creating our word vectors through a Word2Vec model. This step is a little different from the others. The TensorFlow function we see later on (in Seq2Seq.py) actually handles the embedding part as well, so you can either train your own vectors or have the seq2seq function learn them jointly, which is what I ended up doing. If you want to create your own word vectors through Word2Vec, reply y at the prompt (after running the following). If you don't, that's fine; reply n and the script will only create wordList.txt.

    python Word2Vec.py

    If you run Word2Vec.py in its entirety, it will create 4 different files. Word2VecXTrain.npy and Word2VecYTrain.npy are the training matrices that Word2Vec will use. We save these in our folder in case we need to train our Word2Vec model again with different hyperparameters. We also save wordList.txt, which simply contains all of the unique words in our corpus. The last file saved is embeddingMatrix.npy, which is a NumPy matrix that contains all of the generated word vectors.
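
    Conceptually, those training matrices hold (center word, context word) index pairs gathered with a sliding window, which is the standard skip-gram setup. A simplified sketch of the idea (not the script's exact code):

      # Pair each center word with every word inside its context window.
      def skipgram_pairs(words, word_to_index, window_size=5):
          x_train, y_train = [], []
          for i, center in enumerate(words):
              context = words[max(0, i - window_size):i] + words[i + 1:i + window_size + 1]
              for neighbor in context:
                  x_train.append(word_to_index[center])    # input word index
                  y_train.append(word_to_index[neighbor])  # context word index
          return x_train, y_train

      words = 'the quick brown fox jumps over the lazy dog'.split()
      vocab = {w: i for i, w in enumerate(sorted(set(words)))}
      x, y = skipgram_pairs(words, vocab, window_size=2)
      print(len(x), 'training pairs')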

  6. Now, we can create and train our Seq2Seq model.

    python Seq2Seq.py

    This will create 3 or more different files. Seq2SeqXTrain.npy and Seq2SeqYTrain.npy are the training matrices that Seq2Seq will use. Again, we save these just in case we want to make changes to our model architecture and don't want to recompute our training set. The last file(s) will be .ckpt files, which hold our saved Seq2Seq model. Models are saved at regular intervals during the training loop. These will be used and deployed once we've created our chatbot.
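
    Those checkpoints can later be restored with TensorFlow 1.x's Saver, which is also how app.py reloads the model for serving. A minimal sketch, assuming the same graph from Seq2Seq.py has already been rebuilt:

      import tensorflow as tf

      # The graph must be rebuilt exactly as in Seq2Seq.py before restoring:
      # tf.train.Saver maps checkpoint tensors onto the current graph's
      # variables, and mismatched shapes will fail to restore (a common
      # pitfall; see the InvalidArgumentError issues further down this page).
      # ... rebuild the Seq2Seq graph here ...

      saver = tf.train.Saver()
      with tf.Session() as sess:
          # latest_checkpoint picks the newest .ckpt written during training.
          saver.restore(sess, tf.train.latest_checkpoint('models'))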

  7. Now that we have a saved model, let's create our Facebook chatbot. To do so, I'd recommend following this tutorial. You don't need to read anything beneath the "Customize what the bot says" section; our Seq2Seq model will handle that part. IMPORTANT - The tutorial will tell you to create a new folder where the Node project will lie. Keep in mind this folder will be different from our folder. You can think of our folder as where the data preprocessing and model training lie, while the other folder is strictly reserved for the Express app (EDIT: I believe you can follow the tutorial's steps inside of our folder and just create the Node project, Procfile, and index.js files in here if you want). The tutorial itself should be sufficient, but here's a summary of the steps.

    • Build the server, and host on Heroku.
    • Create a Facebook App/Page, set up the webhook, get page token, and trigger the app.
    • Add an API endpoint to index.js so that the bot can respond with messages.

    After following the steps correctly, you should be able to message the chatbot, and get responses back.

  8. Ah, you're almost done! Now we have to create a Flask server where we can deploy our saved Seq2Seq model. I have the code for that server here. Let's talk about the general structure. Flask servers normally have one main .py file where you define all of the endpoints; this will be app.py in our case, and it's where we load in our model. You should create a folder called 'models' and fill it with 4 files (a checkpoint file, a data file, an index file, and a meta file). These are the files that get created when you save a TensorFlow model.

In this app.py file, we want to create a route (/prediction in my case) where the input to the route is fed into our saved model, and the decoder output is the string that gets returned. Go ahead and take a closer look at app.py if that's still a bit confusing. Now that you have your app.py and your models (and other helper files if you need them), you can deploy your server. We'll be using Heroku again. There are a lot of different tutorials on deploying Flask servers to Heroku, but I like this one in particular (you don't need the Foreman and Logging sections).
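
For orientation, here is a stripped-down sketch of what such a route can look like. The pred_response function and the 'message' field name are hypothetical stand-ins; the real app.py loads the TensorFlow session and runs the decoder:

    from flask import Flask, request

    app = Flask(__name__)

    def pred_response(input_message):
        # Hypothetical stand-in: the real app.py feeds the message through
        # the restored Seq2Seq model and decodes the output ids to a string.
        return 'echo: ' + input_message

    @app.route('/prediction', methods=['POST'])
    def prediction():
        # The Express app POSTs the user's message; we reply with model output.
        input_message = request.form.get('message', '')
        return pred_response(input_message)

    if __name__ == '__main__':
        app.run()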

  9. Once you have your Flask server deployed, you'll need to edit your index.js file so that the Express app can communicate with it. Basically, you'll need to send a POST request to the Flask server with the input message your chatbot receives, receive the output, and then use the sendTextMessage function to have the chatbot respond to the message. If you've cloned my repository, all you really need to do is replace the URL of the request function call with the URL of your own server. A quick way to test the endpoint directly is sketched below.
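
    Before wiring up index.js, you can verify the Flask endpoint on its own. A quick check from Python, where the URL and field name are placeholders for your own deployment:

      import requests

      # Placeholder URL and field name: substitute your own Heroku app and
      # whatever field your /prediction route actually reads.
      resp = requests.post('https://your-flask-app.herokuapp.com/prediction',
                           data={'message': 'hi there'})
      print(resp.status_code, resp.text)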

There ya go. You should be able to send messages to the chatbot and see some interesting responses that (hopefully) resemble yourself in some way.

Samples

Please let me know if you have any issues or if you have any suggestions for making this README better. If you thought a certain step was unclear, let me know and I'll try my best to edit the README and make any clarifications.

Contributors

adeshpande3, jarehec, rohith1125, totallynotchase


Issues

app.py issue

Hello,
In point 8, while deploying the Flask server to Heroku, after running the "heroku run python app.py deploy" command I get an error: "ValueError: unsupported pickle protocol: 3". But if I change the content of app.py to some basic code, it runs. I am using Python 3.6.2 on a Windows computer.
Would you please help?

createDataset.py error

I can't run createDataset.py. I installed all the dependencies via pip, but when I run the script I get:

File "createDataset.py", line 51
for index,lines in enumerate(allLines):
^
TabError: inconsistent use of tabs and spaces in indentation

I tried to fix it but with no luck. Any help?

Blank output during training.

First of all, I would like to thank you for such an amazing repo.
Right now I'm facing an issue: I'm not getting any output while training the model; I get a blank list for every input.
I'm attaching the screenshot so that you can get a better picture of the issue.

Can you suggest how many dimensions to use for word vectors?

Hi Adit - great repo! I am trying to use Seq2Seq.py to generate the word vectors, and it asks for the number of dimensions. I read somewhere that it is ideal to use between 300 and 500 dimensions. Can you suggest how many should be used? Or how many did you use for your runs?

Dictionary creation in CreateDataset - getWhatsappData

I am not sure of the reason behind the dictionary in the getWhatsAppData function. Since you concatenate sent messages and then received messages, then loop over them using the firstMessage condition, it's obvious the first message is going to be yours, because you concatenated them that way.

How To Know If The Model Is Trained?

How do I know if the model is fully trained? I'm at around 200k iterations, but how would I know it's trained?
And I have one more question: does the model learn/improve as we keep chatting with it?

Data Dictionary Contentless!

After running createDataset.py, my conversationData.txt is empty! Can you guess the reason?

Thank You

FB createDataset.py

Hello,

Would it be possible to see an example of how the script for creating the dataset (createDataset.py) should look?
I am getting a few errors when I try to set it up.

I have changed the file name, but it still won't pass.

error in Word2Vec.py when running in python 3.6.4

I modified the code to run in 3.x but I'm still getting the following error:
Traceback (most recent call last):
File "Word2Vec.py", line 82, in
pickle.dump(wordList, fp)
TypeError: can't pickle dict_keys objects

wordVectors in seq2seq

Hi, I have a question about using wordVectors in seq2seq.py, since I have to adapt it to my dataset of very long sentences (I saw that sentence length is limited to 15 in the createTrainingMatrices function in seq2seq.py, but my examples are almost short paragraphs ^_^). The createTrainingMatrices function in seq2seq.py creates new vectors for every sentence using word indices, but why not use the pre-trained embeddingMatrix.npy produced by word2vec.py? "wordVectors = np.load('models/embeddingMatrix.npy')" appears in seq2seq.py, but doesn't seem to actually be used?

"IMPORTANT - The tutorial will tell you to create a new folder where the Node project will lie"

Hello.
I am a bit confused about this part at the start of step number 7. Do I have to keep all the files created up to step 6 and the files created in step 7 (following your shared tutorial) in the same folder, and then run the following commands:
git init
git add .
git commit --message "hello world"
heroku create
git push heroku master

Should the folder that gets committed to git look like this?:

Same reply for all questions

@adeshpande3 I get the same reply for any question I ask after training finishes. I used the same hyperparameters as in the Seq2Seq.py code and ran 500,000 iterations with around 40,000 conversations as my dataset. What could the problem be? Can you suggest something to incorporate so that I get decent replies to my questions?

problem in seq2seq

def idsToSentence(ids, wList):
    print "ids", type(ids)
    EOStokenIndex = wList.index('<EOS>')
    padTokenIndex = wList.index('<pad>')
    print "indexes", EOStokenIndex, padTokenIndex
    myStr = ""
    listOfResponses = []

    for num in ids:
        if (num[0] == EOStokenIndex or num[0] == padTokenIndex):
            myStr = ""
        else:
            myStr = myStr + wList[num[0]] + " "

    if myStr:
        listOfResponses.append(myStr)
    listOfResponses = [i for i in listOfResponses if i]
    print "listof restp", listOfResponses[:10]
    return listOfResponses

I never seem to reach the "else" branch in the loop - num[0] is always the EOS or pad token index - and I therefore create no responses. How is that possible?

Invalid outputs

I have followed all the steps as you described and everything runs perfectly, but the output results are not correct, maybe because I used a very small dataset. Still, for the exact conversations I gave as input while training, it should give the correct result, like "hello" in response to "hi" or "fine" in response to "how are you".

Any ideas on: InvalidArgumentError: Assign requires shapes of both tensors to match.

Have gone through this great tutorial (thank you for it!) but am hitting an issue I haven't seen.

On Gunicorn/Flask running on my Heroku server, I get an error saying my tensor shapes don't match:

lhs shape= [52293,48] rhs shape= [52293,112]

Not sure what to do with this, as I trained them using the code as expected in the tutorial.

Do you think this is a training error or a Flask implementation issue? I did update the Py2 code to run on Py3, which may have introduced an error. But that was mainly adding parentheses and cleaning up tabs/spacing - I'm not sure why that would cause different tensor shapes?

Any suggestions would be greatly appreciated.

Log dump below.

2018-05-12T00:36:16.577701+00:00 heroku[web.1]: State changed from crashed to starting
2018-05-12T00:36:51.453212+00:00 heroku[web.1]: Starting process with command `gunicorn app:app`
2018-05-12T00:36:54.374191+00:00 app[web.1]: [2018-05-12 00:36:54 +0000] [4] [INFO] Starting gunicorn 19.8.1
2018-05-12T00:36:54.375117+00:00 app[web.1]: [2018-05-12 00:36:54 +0000] [4] [INFO] Listening at: http://0.0.0.0:45940 (4)
2018-05-12T00:36:54.375270+00:00 app[web.1]: [2018-05-12 00:36:54 +0000] [4] [INFO] Using worker: sync
2018-05-12T00:36:54.380861+00:00 app[web.1]: [2018-05-12 00:36:54 +0000] [8] [INFO] Booting worker with pid: 8
2018-05-12T00:36:54.444401+00:00 app[web.1]: [2018-05-12 00:36:54 +0000] [9] [INFO] Booting worker with pid: 9
2018-05-12T00:36:55.354932+00:00 heroku[web.1]: State changed from starting to up
2018-05-12T00:37:22.145542+00:00 app[web.1]: [2018-05-12 00:37:22 +0000] [8] [ERROR] Exception in worker process
2018-05-12T00:37:22.145573+00:00 app[web.1]: Traceback (most recent call last):
2018-05-12T00:37:22.145575+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
2018-05-12T00:37:22.145577+00:00 app[web.1]:     return fn(*args)
2018-05-12T00:37:22.145579+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
2018-05-12T00:37:22.145581+00:00 app[web.1]:     options, feed_dict, fetch_list, target_list, run_metadata)
2018-05-12T00:37:22.145582+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
2018-05-12T00:37:22.145584+00:00 app[web.1]:     run_metadata)
2018-05-12T00:37:22.145586+00:00 app[web.1]: tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [52293,48] rhs shape= [52293,112]
2018-05-12T00:37:22.145588+00:00 app[web.1]: 	 [[Node: save/Assign_7 = Assign[T=DT_FLOAT, _class=["loc:@embedding_rnn_seq2seq/rnn/embedding_wrapper/embedding"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_rnn_seq2seq/rnn/embedding_wrapper/embedding, save/RestoreV2:7)]]
2018-05-12T00:37:22.145590+00:00 app[web.1]: 
2018-05-12T00:37:22.145592+00:00 app[web.1]: During handling of the above exception, another exception occurred:
2018-05-12T00:37:22.145593+00:00 app[web.1]: 
2018-05-12T00:37:22.145595+00:00 app[web.1]: Traceback (most recent call last):
2018-05-12T00:37:22.145597+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
2018-05-12T00:37:22.145598+00:00 app[web.1]:     worker.init_process()
2018-05-12T00:37:22.145600+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/workers/base.py", line 129, in init_process
2018-05-12T00:37:22.145601+00:00 app[web.1]:     self.load_wsgi()
2018-05-12T00:37:22.145603+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
2018-05-12T00:37:22.145604+00:00 app[web.1]:     self.wsgi = self.app.wsgi()
2018-05-12T00:37:22.145606+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi
2018-05-12T00:37:22.145607+00:00 app[web.1]:     self.callable = self.load()
2018-05-12T00:37:22.145609+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
2018-05-12T00:37:22.145611+00:00 app[web.1]:     return self.load_wsgiapp()
2018-05-12T00:37:22.145612+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
2018-05-12T00:37:22.145613+00:00 app[web.1]:     return util.import_app(self.app_uri)
2018-05-12T00:37:22.145615+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/util.py", line 350, in import_app
2018-05-12T00:37:22.145617+00:00 app[web.1]:     __import__(module)
2018-05-12T00:37:22.145618+00:00 app[web.1]:   File "/app/app.py", line 40, in <module>
2018-05-12T00:37:22.145620+00:00 app[web.1]:     saver.restore(sess, tf.train.latest_checkpoint('models'))
2018-05-12T00:37:22.145621+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1802, in restore
2018-05-12T00:37:22.145623+00:00 app[web.1]:     {self.saver_def.filename_tensor_name: save_path})
2018-05-12T00:37:22.145624+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
2018-05-12T00:37:22.145626+00:00 app[web.1]:     run_metadata_ptr)
2018-05-12T00:37:22.145627+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
2018-05-12T00:37:22.145629+00:00 app[web.1]:     feed_dict_tensor, options, run_metadata)
2018-05-12T00:37:22.145630+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
2018-05-12T00:37:22.145632+00:00 app[web.1]:     run_metadata)
2018-05-12T00:37:22.145633+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
2018-05-12T00:37:22.145635+00:00 app[web.1]:     raise type(e)(node_def, op, message)
2018-05-12T00:37:22.145636+00:00 app[web.1]: tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [52293,48] rhs shape= [52293,112]
2018-05-12T00:37:22.145638+00:00 app[web.1]: 	 [[Node: save/Assign_7 = Assign[T=DT_FLOAT, _class=["loc:@embedding_rnn_seq2seq/rnn/embedding_wrapper/embedding"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_rnn_seq2seq/rnn/embedding_wrapper/embedding, save/RestoreV2:7)]]
2018-05-12T00:37:22.145640+00:00 app[web.1]: 
2018-05-12T00:37:22.145641+00:00 app[web.1]: Caused by op 'save/Assign_7', defined at:
2018-05-12T00:37:22.145643+00:00 app[web.1]:   File "/app/.heroku/python/bin/gunicorn", line 11, in <module>
2018-05-12T00:37:22.145644+00:00 app[web.1]:     sys.exit(run())
2018-05-12T00:37:22.145646+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 61, in run
2018-05-12T00:37:22.145647+00:00 app[web.1]:     WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
2018-05-12T00:37:22.145649+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 223, in run
2018-05-12T00:37:22.145650+00:00 app[web.1]:     super(Application, self).run()
2018-05-12T00:37:22.145652+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 72, in run
2018-05-12T00:37:22.145653+00:00 app[web.1]:     Arbiter(self).run()
2018-05-12T00:37:22.145655+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 203, in run
2018-05-12T00:37:22.145656+00:00 app[web.1]:     self.manage_workers()
2018-05-12T00:37:22.145668+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 545, in manage_workers
2018-05-12T00:37:22.145669+00:00 app[web.1]:     self.spawn_workers()
2018-05-12T00:37:22.145671+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 616, in spawn_workers
2018-05-12T00:37:22.145673+00:00 app[web.1]:     self.spawn_worker()
2018-05-12T00:37:22.145674+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
2018-05-12T00:37:22.145676+00:00 app[web.1]:     worker.init_process()
2018-05-12T00:37:22.145677+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/workers/base.py", line 129, in init_process
2018-05-12T00:37:22.145679+00:00 app[web.1]:     self.load_wsgi()
2018-05-12T00:37:22.145680+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
2018-05-12T00:37:22.145681+00:00 app[web.1]:     self.wsgi = self.app.wsgi()
2018-05-12T00:37:22.145683+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi
2018-05-12T00:37:22.145684+00:00 app[web.1]:     self.callable = self.load()
2018-05-12T00:37:22.145686+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
2018-05-12T00:37:22.145687+00:00 app[web.1]:     return self.load_wsgiapp()
2018-05-12T00:37:22.145689+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
2018-05-12T00:37:22.145690+00:00 app[web.1]:     return util.import_app(self.app_uri)
2018-05-12T00:37:22.145692+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/util.py", line 350, in import_app
2018-05-12T00:37:22.145695+00:00 app[web.1]:     __import__(module)
2018-05-12T00:37:22.145696+00:00 app[web.1]:   File "<frozen importlib._bootstrap>", line 971, in _find_and_load
2018-05-12T00:37:22.145698+00:00 app[web.1]:   File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
2018-05-12T00:37:22.145699+00:00 app[web.1]:   File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
2018-05-12T00:37:22.145701+00:00 app[web.1]:   File "<frozen importlib._bootstrap_external>", line 678, in exec_module
2018-05-12T00:37:22.145702+00:00 app[web.1]:   File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
2018-05-12T00:37:22.145704+00:00 app[web.1]:   File "/app/app.py", line 39, in <module>
2018-05-12T00:37:22.145705+00:00 app[web.1]:     saver = tf.train.Saver()
2018-05-12T00:37:22.145707+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1338, in __init__
2018-05-12T00:37:22.145709+00:00 app[web.1]:     self.build()
2018-05-12T00:37:22.145710+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1347, in build
2018-05-12T00:37:22.145712+00:00 app[web.1]:     self._build(self._filename, build_save=True, build_restore=True)
2018-05-12T00:37:22.145713+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1384, in _build
2018-05-12T00:37:22.145715+00:00 app[web.1]:     build_save=build_save, build_restore=build_restore)
2018-05-12T00:37:22.145716+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 835, in _build_internal
2018-05-12T00:37:22.145724+00:00 app[web.1]:     restore_sequentially, reshape)
2018-05-12T00:37:22.145726+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 494, in _AddRestoreOps
2018-05-12T00:37:22.145728+00:00 app[web.1]:     assign_ops.append(saveable.restore(saveable_tensors, shapes))
2018-05-12T00:37:22.145729+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 185, in restore
2018-05-12T00:37:22.145731+00:00 app[web.1]:     self.op.get_shape().is_fully_defined())
2018-05-12T00:37:22.145733+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/ops/state_ops.py", line 283, in assign
2018-05-12T00:37:22.145734+00:00 app[web.1]:     validate_shape=validate_shape)
2018-05-12T00:37:22.145736+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/ops/gen_state_ops.py", line 60, in assign
2018-05-12T00:37:22.145737+00:00 app[web.1]:     use_locking=use_locking, name=name)
2018-05-12T00:37:22.145739+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
2018-05-12T00:37:22.145741+00:00 app[web.1]:     op_def=op_def)
2018-05-12T00:37:22.145742+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
2018-05-12T00:37:22.145744+00:00 app[web.1]:     op_def=op_def)
2018-05-12T00:37:22.145745+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
2018-05-12T00:37:22.145747+00:00 app[web.1]:     self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access
2018-05-12T00:37:22.145748+00:00 app[web.1]: 
2018-05-12T00:37:22.145750+00:00 app[web.1]: InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [52293,48] rhs shape= [52293,112]
2018-05-12T00:37:22.145752+00:00 app[web.1]: 	 [[Node: save/Assign_7 = Assign[T=DT_FLOAT, _class=["loc:@embedding_rnn_seq2seq/rnn/embedding_wrapper/embedding"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_rnn_seq2seq/rnn/embedding_wrapper/embedding, save/RestoreV2:7)]]
2018-05-12T00:37:22.145753+00:00 app[web.1]: 
2018-05-12T00:37:22.146397+00:00 app[web.1]: [2018-05-12 00:37:22 +0000] [8] [INFO] Worker exiting (pid: 8)
[The second worker (pid 9) then fails with an identical InvalidArgumentError traceback.]
2018-05-12T00:37:24.811289+00:00 app[web.1]: Traceback (most recent call last):
2018-05-12T00:37:24.811360+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 210, in run
2018-05-12T00:37:24.811770+00:00 app[web.1]:     self.sleep()
2018-05-12T00:37:24.811805+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 360, in sleep
2018-05-12T00:37:24.812141+00:00 app[web.1]:     ready = select.select([self.PIPE[0]], [], [], 1.0)
2018-05-12T00:37:24.812178+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 245, in handle_chld
2018-05-12T00:37:24.812444+00:00 app[web.1]:     self.reap_workers()
2018-05-12T00:37:24.812482+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
2018-05-12T00:37:24.812864+00:00 app[web.1]:     raise HaltServer(reason, self.WORKER_BOOT_ERROR)
2018-05-12T00:37:24.812932+00:00 app[web.1]: gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
2018-05-12T00:37:24.812965+00:00 app[web.1]: 
2018-05-12T00:37:24.812968+00:00 app[web.1]: During handling of the above exception, another exception occurred:
2018-05-12T00:37:24.812969+00:00 app[web.1]: 
2018-05-12T00:37:24.813002+00:00 app[web.1]: Traceback (most recent call last):
2018-05-12T00:37:24.813032+00:00 app[web.1]:   File "/app/.heroku/python/bin/gunicorn", line 11, in <module>
2018-05-12T00:37:24.813199+00:00 app[web.1]:     sys.exit(run())
2018-05-12T00:37:24.813230+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 61, in run
2018-05-12T00:37:24.813414+00:00 app[web.1]:     WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
2018-05-12T00:37:24.813450+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 223, in run
2018-05-12T00:37:24.813710+00:00 app[web.1]:     super(Application, self).run()
2018-05-12T00:37:24.813746+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 72, in run
2018-05-12T00:37:24.813937+00:00 app[web.1]:     Arbiter(self).run()
2018-05-12T00:37:24.813970+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 232, in run
2018-05-12T00:37:24.814214+00:00 app[web.1]:     self.halt(reason=inst.reason, exit_status=inst.exit_status)
2018-05-12T00:37:24.814275+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 345, in halt
2018-05-12T00:37:24.814587+00:00 app[web.1]:     self.stop()
2018-05-12T00:37:24.814618+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 393, in stop
2018-05-12T00:37:24.814948+00:00 app[web.1]:     time.sleep(0.1)
2018-05-12T00:37:24.814986+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 245, in handle_chld
2018-05-12T00:37:24.815218+00:00 app[web.1]:     self.reap_workers()
2018-05-12T00:37:24.815253+00:00 app[web.1]:   File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
2018-05-12T00:37:24.815679+00:00 app[web.1]:     raise HaltServer(reason, self.WORKER_BOOT_ERROR)
2018-05-12T00:37:24.815718+00:00 app[web.1]: gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
2018-05-12T00:37:24.926274+00:00 heroku[web.1]: Process exited with status 1
2018-05-12T00:37:24.993939+00:00 heroku[web.1]: State changed from up to crashed

Support for parsing custom corpus

It would be great if you supported parsing a custom corpus that a user provides, already formatted in a recognizable way, that can be fed into createDataset.py.
For example, if you could pass a text file that looks like:

message: How's it going today?
response: It's going alright
message: What's for dinner tonight?
response: Chicken baked with cheese
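
A minimal sketch of how such a file could be read into (message, response) pairs, assuming strictly alternating "message:"/"response:" lines (hypothetical helper, not part of the repo):

    def load_custom_corpus(path):
        # Assumes strictly alternating "message:" / "response:" lines.
        pairs = []
        message = None
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith('message:'):
                    message = line[len('message:'):].strip()
                elif line.startswith('response:') and message is not None:
                    pairs.append((message, line[len('response:'):].strip()))
                    message = None
        return pairs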

"Update" model by expanding training set?

I successfully got a little chatbot up and running using my FB data -- training it took ~8 hours (~150,000 records), which is not too bad. It gives a good base.

I'm wondering how I can improve on the results over time. The most obvious way to me is to extract more data from other sources and "update" the model by continuing training on more data. Is there a way to do this without having to retrain from scratch?

When I uncomment the line that loads a saved model (I think it's line 202?) and run seq2seq, the initial outputs look like it's training from scratch again - I get [] outputs for the first several thousand runs, and then outputs that look primitive...

Alternatively, is there any possibility of using the trained model in a reinforcement learning process to help it perform better over time? I'm happy to do the research and work on this if you can point me in the right direction!

(Thanks for the great tutorial by the way!!!)

me again

Thanks for your help last time!
My model is finally done training (yay!), but now I've run into another error:

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [192] rhs shape= [448]
[[Node: save/Assign_1 = Assign[T=DT_FLOAT, _class=["loc:@embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/output_projection_wrapper/basic_lstm_cell/bias"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/output_projection_wrapper/basic_lstm_cell/bias, save/RestoreV2_1)]]

But I don't understand where these tensors are defined :/
Don't I need to feed tf.train.Saver() some variables?

What's the training data formatting?

I want to use training data from a different platform, but I'm unsure what format the training data should be in.

Is it JSON? If so, can I get an example?

Thanks.

Problem pushing Python app to Heroku

I am having this problem while pushing to Heroku:

remote: -----> Compressing...
remote: ! Compiled slug size: 605.7M is too large (max is 500M).
remote: ! See: http://devcenter.heroku.com/articles/slug-size
remote:
remote: ! Push failed
remote: Verifying deploy...
remote:
remote: ! Push rejected to appname.

I am not sure how to proceed; the dependencies are quite big. How did you do it?

Check whitespace between words/phrases

Hey, I started training my bot last night. Looking at some of the items in the word list and in the training printouts, I noticed that some items appear to be two words concatenated together, without spaces. I wonder if there's a bug in the Facebook parser that isn't adding spaces everywhere necessary.

It seems to happen when running python createDataset.py and generating conversationData.txt.

compilation error in seq2seq.py

@adeshpande3, I get an error saying:
ValueError: empty range for randrange() (0, -24, -24)
My localBatchSize = 24 and numTrainingExamples = 0.
I don't understand why numTrainingExamples is zero; line 176, 'numTrainingExamples = xTrain.shape[0]', returns zero. Can you please help solve this issue?
Thanks in advance.

output-node_name

I need to freeze the graph so I can load it into Android, but I can't do that without knowing the output_node_name. Do you know what it is for this model?

out of memory with GTX980

Hello there!
I completed the awesome tutorial successfully, except for the training (Seq2Seq.py) itself: every time I run it, I run out of memory. What can I do to fix that?
I tried reducing "batchSize = 24", with no results.
I'm using 256-dimensional word vectors.
Sorry for the noob question; I'm new to the world of deep learning, especially GPU processing.

Ubuntu 16.04
GPU: NVIDIA GeForce GTX 980
RAM: 16 GB
Tensorflow-GPU 1.6.0, Python 2.7, Cuda 9.0, cuDNN 7

This is what I get:

Traceback (most recent call last):
  File "Seq2Seq.py", line 228, in <module>
    curLoss, _, pred = sess.run([loss, optimizer, decoderPrediction], feed_dict=feedDict)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 905, in run
    run_metadata_ptr)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1137, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1355, in _do_run
    options, run_metadata)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1374, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[295899,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[Node: Adam/update_embedding_rnn_seq2seq/embedding_rnn_decoder/embedding/mul_5 = Mul[T=DT_FLOAT, _class=["loc:@embedding_rnn_seq2seq/embedding_rnn_decoder/embedding"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](embedding_rnn_seq2seq/embedding_rnn_decoder/embedding/Adam_1/read, Adam/beta2)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[Node: beta2_power/read/_113 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_262_beta2_power/read", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op u'Adam/update_embedding_rnn_seq2seq/embedding_rnn_decoder/embedding/mul_5', defined at:
  File "Seq2Seq.py", line 196, in <module>
    optimizer = tf.train.AdamOptimizer(1e-4).minimize(loss)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 369, in minimize
    name=name)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 532, in apply_gradients
    update_ops.append(processor.update_op(self, grad))
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 118, in update_op
    return optimizer._apply_sparse_duplicate_indices(g, self._v)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 798, in _apply_sparse_duplicate_indices
    return self._apply_sparse(gradient_no_duplicate_indices, var)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/adam.py", line 199, in _apply_sparse
    lambda x, i, v: state_ops.scatter_add(  # pylint: disable=g-long-lambda
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/training/adam.py", line 187, in _apply_sparse_shared
    v_t = state_ops.assign(v, v * beta2_t, use_locking=self._use_locking)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 780, in _run_op
    return getattr(ops.Tensor, operator)(a._AsTensor(), *args)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 934, in binary_op_wrapper
    return func(x, y, name=name)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 1161, in _mul_dispatch
    return gen_math_ops._mul(x, y, name=name)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 2789, in _mul
    "Mul", x=x, y=y, name=name)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3271, in create_op
    op_def=op_def)
  File "/home/hunterwolf/anaconda3/envs/tensorflow27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1650, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[295899,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[Node: Adam/update_embedding_rnn_seq2seq/embedding_rnn_decoder/embedding/mul_5 = Mul[T=DT_FLOAT, _class=["loc:@embedding_rnn_seq2seq/embedding_rnn_decoder/embedding"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](embedding_rnn_seq2seq/embedding_rnn_decoder/embedding/Adam_1/read, Adam/beta2)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[Node: beta2_power/read/_113 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_262_beta2_power/read", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Any other way to check without Messenger integration

Is there any way to check what the responses will be for some input sentences without using Messenger? How can we get the responses without steps 7, 8, and 9? What I'm trying to say is: it would be nice to show a simple way to just check what the responses will be if we give some sentences as input.
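
A minimal sketch of such a local check, skipping the Messenger webhook entirely. It assumes the prediction helper can be imported from app.py (other issues here reference a pred() function at app.py line 58); adjust the import to the actual layout:

    # interactive sanity check in a terminal, no Facebook integration needed
    from app import pred  # importing app.py also restores the trained model

    while True:
        sentence = raw_input('You: ')  # use input() on Python 3
        if not sentence:
            break
        print('Bot: ' + pred(sentence))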

Multiple syntax errors when trying to run "Word2Vec.py"

I have no clue why it refuses to run. I tried running it with all the requirements in a Python 3.6 Anaconda environment and in a 2.7 environment. Everything appears to match up, and honestly I'm thoroughly stumped, being slightly new to Python. I can't tell if anything is missing either, but for comparison I'll paste the whole thing.

import tensorflow as tf
import numpy as np
import re
from collections import Counter
import sys
import math
from random import randint
import pickle
import os

wordVecDimensions = 100
batchSize = 128
numNegativeSample = 64
windowSize = 5
numIterations = 100000

def processDataset(filename):
    openedFile = open(filename, 'r')
    allLines = openedFile.readlines()
    myStr = ""
    for line in allLines:
        myStr += line
    finalDict = Counter(myStr.split())
    return myStr, finalDict

def createTrainingMatrices(dictionary, corpus):
    allUniqueWords = list(dictionary.keys())  # list() so .index() also works on Python 3
    allWords = corpus.split()
    numTotalWords = len(allWords)
    xTrain = []
    yTrain = []
    for i in range(numTotalWords):
        if i % 100000 == 0:
            print('Finished %d/%d total words' % (i, numTotalWords))  # closing paren was missing
        wordsAfter = allWords[i + 1:i + windowSize + 1]  # '=' (assignment), not '==' (comparison)
        wordsBefore = allWords[max(0, i - windowSize):i]
        wordsAdded = wordsAfter + wordsBefore
        for word in wordsAdded:
            xTrain.append(allUniqueWords.index(allWords[i]))
            yTrain.append(allUniqueWords.index(word))
    return xTrain, yTrain

def getTrainingBatch():
    # numTrainingExamples, xTrain, and yTrain are set later in the script
    num = randint(0, numTrainingExamples - batchSize - 1)
    arr = xTrain[num:num + batchSize]
    labels = yTrain[num:num + batchSize]
    return arr, labels[:, np.newaxis]

continueWord2Vec = True
if (os.path.isfile('Word2VecXTrain.npy') and os.path.isfile('Word2VecYTrain.npy') and os.path.isfile('wordList.txt')):
    xTrain = np.load('Word2VecXTrain.npy')
    yTrain = np.load('Word2VecYTrain.npy')
    print('Finished loading training matrices')
    with open("wordList.txt", "rb") as fp:
        wordList = pickle.load(fp)
    print('Finished loading word list')

else:
    fullCorpus, datasetDictionary = processDataset('conversationData.txt')
    print('Finished parsing and cleaning dataset')
    wordList = list(datasetDictionary.keys())  # list() keeps this picklable on Python 3
    createOwnVectors = raw_input('Do you want to create your own vectors through Word2Vec (y/n)?')  # use input() on Python 3
    if (createOwnVectors == 'y'):
        xTrain, yTrain = createTrainingMatrices(datasetDictionary, fullCorpus)
        print('Finished creating training matrices')
        np.save('Word2VecXTrain.npy', xTrain)
        np.save('Word2VecYTrain.npy', yTrain)
    else:
        continueWord2Vec = False
    with open("wordList.txt", "wb") as fp:
        pickle.dump(wordList, fp)

ValueError

Hi all,
when I try to run seq2seq.py I get:

Finished loading training matrices
Traceback (most recent call last):
  File "seq.py", line 223, in <module>
    encoderTrain, decoderTargetTrain, decoderInputTrain = getTrainingBatch(xTrain, yTrain, batchSize, maxEncoderLength)
  File "seq.py", line 53, in getTrainingBatch
    num = randint(0, numTrainingExamples - localBatchSize - 1)
  File "C:\Users\ozun\Anaconda3\envs\tf-gpu\lib\random.py", line 221, in randint
    return self.randrange(a, b+1)
  File "C:\Users\ozun\Anaconda3\envs\tf-gpu\lib\random.py", line 199, in randrange
    raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (0,-12, -12)
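
The (0, -12, -12) range means numTrainingExamples came out 12 smaller than the batch size, i.e. the parsed dataset has fewer training pairs than a single batch; the simplest fix is a smaller batchSize in the script (or more data). A hedged guard using the names from the traceback:

    # inside getTrainingBatch: never ask for a batch larger than the dataset
    localBatchSize = min(localBatchSize, numTrainingExamples - 1)
    num = randint(0, numTrainingExamples - localBatchSize - 1)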

InvalidArgumentError: Assign requires shapes of both tensors to match, #2

Hi.
I've already trained a seq2seq model following this brilliant article; however, I would like to train it further on new data and would seriously love to avoid training the whole thing again. I keep getting an "InvalidArgumentError: Assign requires shapes of both tensors to match" error whenever I restore the model and try to retrain it the normal way. Any ideas on how to go about this?

fbchat_archive_parser.parser.FacebookDataError

parse_file.zip

I get an error while parsing the messages.htm file:

fbcap ./messages.htm > fbMessages.txt

Traceback (most recent call last):
  File "/Users/coddict/anaconda/bin/fbcap", line 11, in <module>
    load_entry_point('fbchat-archive-parser==1.0.post1', 'console_scripts', 'fbcap')()
  File "/Users/coddict/anaconda/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/Users/coddict/anaconda/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/Users/coddict/anaconda/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/coddict/anaconda/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/Users/coddict/anaconda/lib/python3.6/site-packages/fbchat_archive_parser/main.py", line 118, in fbcap
    fbch = parser.parse()
  File "/Users/coddict/anaconda/lib/python3.6/site-packages/fbchat_archive_parser/parser.py", line 92, in parse
    self._parse_content()
  File "/Users/coddict/anaconda/lib/python3.6/site-packages/fbchat_archive_parser/parser.py", line 117, in _parse_content
    self._process_element(pos, element)
  File "/Users/coddict/anaconda/lib/python3.6/site-packages/fbchat_archive_parser/parser.py", line 250, in _process_element
    "An unrecoverable parsing error has occurred (missing timestamp data)"
fbchat_archive_parser.parser.FacebookDataError: An unrecoverable parsing error has occurred (missing timestamp data)

Issue with Word2Vec.py

_pickle.PicklingError: Can't pickle <class 'dict_keys'>: attribute lookup dict_keys on builtins failed

I'm sorry about the ignorance, but I can't completely make sense of this error. Any guidance would be much appreciated. I am running on Windows, if that is an issue here. Please suggest a workaround.
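
This is a Python 3 issue rather than a Windows one: dict.keys() returns a dict_keys view there, which pickle cannot serialize (on Python 2 it returned a plain list). Converting the view before dumping should fix it; a minimal sketch using the names from Word2Vec.py:

    import pickle

    wordList = list(datasetDictionary.keys())  # list() makes the key view picklable on Python 3

    with open("wordList.txt", "wb") as fp:
        pickle.dump(wordList, fp)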

TensorFlow error while running the Flask server

Getting the below error when starting up the server. Please help.

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [223,48] rhs shape= [223,112]

	 [[Node: save/Assign_7 = Assign[T=DT_FLOAT, _class=["loc:@embedding_rnn_seq2seq/rnn/embedding_wrapper/embedding"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_rnn_seq2seq/rnn/embedding_wrapper/embedding, save/RestoreV2_7)]]
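
Here the lhs [223,48] is the variable in the graph the server builds, while the rhs [223,112] is what the checkpoint actually stores, so app.py is constructing the model with a smaller embedding size than it was trained with. A hedged way to list every shape stored in a TF 1.x checkpoint for comparison:

    import tensorflow as tf

    ckpt = tf.train.latest_checkpoint('models/')  # adjust to where the server loads from
    reader = tf.train.NewCheckpointReader(ckpt)
    for name, shape in sorted(reader.get_variable_to_shape_map().items()):
        print(name, shape)  # compare against the shapes app.py builds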

Flask server deployment with Tensorflow error

While on Step 8, I'm trying to deploy my Flask server to Heroku and am currently facing a TensorFlow error.

python -c 'import tensorflow as tf; print(tf.__version__)'
1.3.0

Here is the error:

ʕ•ﻌ•ʔ:cg2-flask cyrusgoh$ gunicorn app:app
[2017-09-16 00:03:10 -0700] [9531] [INFO] Starting gunicorn 19.7.1
[2017-09-16 00:03:10 -0700] [9531] [INFO] Listening at: http://127.0.0.1:8000 (9531)
[2017-09-16 00:03:10 -0700] [9531] [INFO] Using worker: sync
[2017-09-16 00:03:10 -0700] [9534] [INFO] Booting worker with pid: 9534
[2017-09-16 00:03:25 -0700] [9534] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 578, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 135, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python2.7/site-packages/gunicorn/util.py", line 352, in import_app
    __import__(module)
  File "/Users/gohloongkoon/Documents/git/cg2-flask/app.py", line 45, in <module>
    saver.restore(sess, tf.train.latest_checkpoint('models/'))
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1560, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1124, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
    options, run_metadata)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [192] rhs shape= [448]
         [[Node: save/Assign_1 = Assign[T=DT_FLOAT, _class=["loc:@embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/output_projection_wrapper/basic_lstm_cell/bias"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/output_projection_wrapper/basic_lstm_cell/bias, save/RestoreV2_1)]]

Caused by op u'save/Assign_1', defined at:
  File "/usr/local/bin/gunicorn", line 11, in <module>
    sys.exit(run())
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run
    WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 203, in run
    super(Application, self).run()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 72, in run
    Arbiter(self).run()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 202, in run
    self.manage_workers()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 544, in manage_workers
    self.spawn_workers()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 611, in spawn_workers
    self.spawn_worker()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 578, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 126, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 135, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 65, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python2.7/site-packages/gunicorn/util.py", line 352, in import_app
    __import__(module)
  File "/Users/gohloongkoon/Documents/git/cg2-flask/app.py", line 44, in <module>
    saver = tf.train.Saver()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1140, in __init__
    self.build()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1172, in build
    filename=self._filename)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 688, in build
    restore_sequentially, reshape)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 419, in _AddRestoreOps
    assign_ops.append(saveable.restore(tensors, shapes))
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 155, in restore
    self.op.get_shape().is_fully_defined())
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/state_ops.py", line 274, in assign
    validate_shape=validate_shape)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 43, in assign
    use_locking=use_locking, name=name)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [192] rhs shape= [448]

         [[Node: save/Assign_1 = Assign[T=DT_FLOAT, _class=["loc:@embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/output_projection_wrapper/basic_lstm_cell/bias"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/output_projection_wrapper/basic_lstm_cell/bias, save/RestoreV2_1)]]

[2017-09-16 00:03:25 -0700] [9534] [INFO] Worker exiting (pid: 9534)
[2017-09-16 00:03:26 -0700] [9531] [INFO] Shutting down: Master
[2017-09-16 00:03:26 -0700] [9531] [INFO] Reason: Worker failed to boot.

I can't seem to find the solution. This is the closest and most similar question I could find: Stackoverflow

In short, the problem is InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [192] rhs shape= [448]

How do I reshape the rhs to [192], so that lhs and rhs have the same shape?

PS: Solutions I tried at line 42 of app.py:

# Load in pretrained model
saver = tf.train.Saver()
# saver = tf.train.Saver(reshape=True)
# saver = tf.train.Saver(tf.global_variables(), reshape=True)
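
Reshaping can't work here, since [192] and [448] hold different amounts of data. A BasicLSTMCell's bias has shape [4 * num_units], so the checkpoint was trained with 448 / 4 = 112 units while the serving graph is being built with 192 / 4 = 48; the fix is to build app.py's graph with the training-time value rather than reshaping the checkpoint. A one-line hedged sketch (the hyperparameter name is an assumption; match whatever Seq2Seq.py used):

    # in app.py, before the seq2seq graph is built
    lstmUnits = 112  # must equal the value used during training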

getData() for Discord logs

I think that, to improve the functionality and increase the dataset, the scan should include Discord messages too. Users can simply get their Discord chat logs from this exporter, much the same way they get their other chat logs.

I already have some regex ready to parse through the whole log, tried and tested, efficient and fast. The regex collects one or more responses (from the given username) for each message (from someone who is not the user). This will obviously work best in direct messages, but it can definitely work on servers too.

Let me know if you'd like this feature to be implemented :D @adeshpande3

Seq2seq.py problem

Hello!
I get this problem in Seq2Seq.py:

Traceback (most recent call last):
  File "Seq2Seq.py", line 163, in <module>
    numTrainingExamples, xTrain, yTrain = createTrainingMatrices('conversationDictionary.npy', wordList, maxEncoderLength)
  File "Seq2Seq.py", line 43, in createTrainingMatrices
    decoderMessage[valueIndex + 1] = wList.index('<EOS>')
UnboundLocalError: local variable 'valueIndex' referenced before assignment

What could be wrong?
Thank you!
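
valueIndex here is the loop variable of an enumerate() over the words of a decoder message, so if some message in conversationDictionary.npy is empty, the loop body never runs and valueIndex is never bound when it's used after the loop. A minimal hedged guard (valueMessage is an assumed name for the message being enumerated):

    # skip empty messages so the enumerate() loop always binds valueIndex
    if len(valueMessage) == 0:
        continue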

File "/app/app.py", line 58, in prediction -- AttributeError: 'NoneType' object has no attribute 'fileno'

Hi all,

I'm getting a 'NoneType' object has no attribute 'fileno' error from app/app.py, as well as a TypeError: 'NoneType' object is not subscriptable error. Any insights or hints on these errors? Thanks in advance for your input.

Heroku App:
2018-06-17T06:55:58.697306+00:00 app[web.1]: File "/app/app.py", line 58, in prediction
2018-06-17T06:55:58.697308+00:00 app[web.1]: response = pred(str(request.json['message']))
2018-06-17T06:55:58.697317+00:00 app[web.1]: TypeError: 'NoneType' object is not subscriptable

Traceback (most recent call last):
  File "c:\Chatbot-Flask-Server-master\app.py", line 67, in <module>
    app.run(debug=True)
  File "C:\Users\sriranga\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\app.py", line 938, in run
    cli.show_server_banner(self.env, self.debug, self.name, False)
  File "C:\Users\sriranga\AppData\Local\Continuum\anaconda3\lib\site-packages\flask\cli.py", line 629, in show_server_banner
    click.echo(message)
  File "C:\Users\sriranga\AppData\Local\Continuum\anaconda3\lib\site-packages\click\utils.py", line 217, in echo
    file = _default_text_stdout()
  File "C:\Users\sriranga\AppData\Local\Continuum\anaconda3\lib\site-packages\click\_compat.py", line 621, in func
    rv = wrapper_func()
  File "C:\Users\sriranga\AppData\Local\Continuum\anaconda3\lib\site-packages\click\_compat.py", line 385, in get_text_stdout
    rv = _get_windows_console_stream(sys.stdout, encoding, errors)
  File "C:\Users\sriranga\AppData\Local\Continuum\anaconda3\lib\site-packages\click\_winconsole.py", line 261, in _get_windows_console_stream
    func = _stream_factories.get(f.fileno())
AttributeError: 'NoneType' object has no attribute 'fileno'
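
These are two separate problems. The fileno error is click failing to print Flask's startup banner because sys.stdout is None, which happens when the script is launched without a real console on Windows (pythonw, some IDE run buttons); starting it from an actual terminal with python app.py avoids it. The 'not subscriptable' error means request.json was None because the POST body wasn't sent with a JSON Content-Type. A hedged sketch of the route-side fix:

    # in app.py's prediction route: parse the body as JSON even if the client
    # forgot the application/json Content-Type header, so the data is never None
    data = request.get_json(force=True)
    response = pred(str(data['message']))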

Training and Testing Ratio?

I am not able to find the dataset division in your code; can you please point me to the snippet where the data is divided for training, testing, and validation?
I am trying to run this program on the Cornell dataset and need to divide it into separate parts. Also, the blog post mentions that we won't need the Word2Vec file, but I guess the code hasn't been updated for that, because Word2Vec is still required in seq2seq.py.
It would be great if you could clear up the above points.
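
As far as the scripts show, every (encoder, decoder) pair is used for training and there is no built-in split. A minimal hedged sketch of an 80/10/10 split, assuming xTrain and yTrain are the numpy arrays the training script saves (file names assumed from the repo's naming convention):

    import numpy as np

    xTrain = np.load('Seq2SeqXTrain.npy')
    yTrain = np.load('Seq2SeqYTrain.npy')

    idx = np.random.permutation(len(xTrain))  # shuffle pairs before splitting
    nTrain, nVal = int(0.8 * len(idx)), int(0.9 * len(idx))
    trainIdx, valIdx, testIdx = idx[:nTrain], idx[nTrain:nVal], idx[nVal:]

    xTr, yTr = xTrain[trainIdx], yTrain[trainIdx]
    xVal, yVal = xTrain[valIdx], yTrain[valIdx]
    xTe, yTe = xTrain[testIdx], yTrain[testIdx]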

Add requirements.txt

Hey there, I know you list the required libraries for the project in your README, but it may be helpful to add a requirements.txt as well. This would give a structured set of dependencies with pinned versions, and would also give developers a single command to run for installation.
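
For example, the usual way to produce and consume such a file:

    pip freeze > requirements.txt     # write every installed package, pinned to its exact version
    pip install -r requirements.txt   # the one command other developers run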

Port code to python 3

Python 2 has reached EOL and it's time for this repo to upgrade. I've been working on a basic port for a while and I'd like to detail my approach below.

General

  • encoding='utf-8' will be included as an extra parameter in all open(...) calls
  • print 'string' will be changed to print('string')
  • raw_input(s) will be changed to input(s)
  • dict.iteritems() will be changed to dict.items()

Tensorflow

We will be using TensorFlow 2.1.0; please note there have been drastic changes in this version compared to the one we are using. I'm only focusing on replacing the old commands with their alternatives (a minimal sketch of the resulting style follows at the end of this issue). The new TensorFlow offers many improved features and may even behave differently for some existing commands, which I won't be focusing on.

  • tf.Session() will be replaced with tf.compat.v1.Session()
  • Going to use tf.compat.v1.disable_eager_execution(), without this we cannot use placeholder()
  • tf.random_uniform() will be replaced with tf.random.uniform()
  • tf.truncated_normal() will be replaced with tf.random.truncated_normal()
  • tf.placeholder() will be replaced with tf.compat.v1.placeholder()
  • tf.train.GradientDescentOptimizer() will be replaced with tf.compat.v1.train.GradientDescentOptimizer()
  • tf.global_variables_initializer() will be replaced with tf.compat.v1.global_variables_initializer()

*more changes soon

As of now, createDataset.py and Word2Vec.py are completely overhauled for Python 3. I am currently working on porting Seq2Seq.py and will post my progress and/or hurdles here. Please be on the lookout for that!
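
A minimal sketch of the ported style under the substitutions listed above, runnable on TensorFlow 2.1.0 (the tensors here are illustrative, not the repo's actual graph):

    import tensorflow as tf

    # TF 2.x executes eagerly by default; the old graph/placeholder style needs this off
    tf.compat.v1.disable_eager_execution()

    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 100])  # was tf.placeholder()
    w = tf.Variable(tf.random.uniform([100, 10], -1.0, 1.0))     # was tf.random_uniform()
    y = tf.matmul(x, w)

    sess = tf.compat.v1.Session()                                # was tf.Session()
    sess.run(tf.compat.v1.global_variables_initializer())        # was tf.global_variables_initializer()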
