
MultiTurnResponseSelection's Introduction

Douban Conversation Corpus

Data set

We release the Douban Conversation Corpus, comprising a training set, a development set, and a test set for retrieval-based chatbots. The statistics of the Douban Conversation Corpus are shown in the following table.

|                                     | Train | Val   | Test  |
|-------------------------------------|-------|-------|-------|
| Session-response pairs              | 1M    | 50K   | 10K   |
| Avg. positive responses per session | 1     | 1     | 1.18  |
| Fleiss' kappa                       | N/A   | N/A   | 0.41  |
| Min. turns per session              | 3     | 3     | 3     |
| Max. turns per session              | 98    | 91    | 45    |
| Avg. turns per session              | 6.69  | 6.75  | 5.95  |
| Avg. words per utterance            | 18.56 | 18.50 | 20.74 |

The test data contains 1,000 dialogue contexts, and for each context we created 10 candidate responses. We recruited three labelers to judge whether each candidate is a proper response to the session, where a proper response is one that naturally replies to the message given the context. Each pair received three labels, and the majority label was taken as the final decision.
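The majority-vote rule above can be sketched as follows (an illustration, not the authors' labeling code):

```python
from collections import Counter

def majority_label(labels):
    """Return the label chosen by the majority of annotators."""
    return Counter(labels).most_common(1)[0][0]

# Each candidate response receives three binary judgments (1 = proper, 0 = not).
print(majority_label([1, 1, 0]))  # 1
print(majority_label([0, 0, 1]))  # 0
```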


As far as we know, this is the first human-labeled test set for retrieval-based chatbots. The entire corpus is available at https://www.dropbox.com/s/90t0qtji9ow20ca/DoubanConversaionCorpus.zip?dl=0

Data template

label \t conversation utterances (split by \t) \t response
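A line in this format can be parsed as follows (a minimal sketch; the function name is illustrative):

```python
def parse_line(line):
    """Split a tab-separated line into (label, context utterances, response)."""
    fields = line.rstrip("\n").split("\t")
    label = int(fields[0])          # first field is the 0/1 label
    utterances = fields[1:-1]       # everything between label and response
    response = fields[-1]           # last field is the candidate response
    return label, utterances, response

label, utterances, response = parse_line("1\thello\thow are you\tfine thanks")
# label == 1, utterances == ["hello", "how are you"], response == "fine thanks"
```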

Source Code

We also release our source code to help others reproduce our results. The code has been tested under Ubuntu 14.04 with Python 2.7.

Please first edit preprocess.py with the correct paths and run it; it will produce a .bin file. Then run SMN_Last.py with the generated .bin file, and the training loss will be printed to the screen. If you set train_flag = False, it will instead output the predicted scores of your model.

Some tips:

The 200-d word embeddings are shared at https://1drv.ms/u/s!AtcxwlQuQjw1jF0bjeaKHEUNwitA . The shared file is a list with three elements, one of which is a word2vec file. Please download it and replace the input path (training data) in the script.
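Assuming the shared file is a Python pickle (an assumption; the README only says it is a three-element list), loading and sanity-checking it might look like:

```python
import pickle

def load_embedding_bundle(path):
    """Load the shared file; it should be a list with three elements,
    one of which is a word2vec-style embedding (element order is undocumented)."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    if not (isinstance(data, list) and len(data) == 3):
        raise ValueError("expected a 3-element list, got %r" % type(data))
    return data
```

Inspect each element after loading to find the word2vec part; since the code targets Python 2, loading the file under Python 3 may additionally require `pickle.load(f, encoding='latin1')`.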

TensorFlow resources:

The TensorFlow code requires several data files, which have been uploaded at the following links:

Resource file: https://1drv.ms/u/s!AtcxwlQuQjw1jGn5kPzsH03lnG6U

Worddict file: https://1drv.ms/u/s!AtcxwlQuQjw1jGrCjg8liK1wE-N9

Requirement: tensorflow>=1.3

Reference

Please cite our paper if you use the data or code in this repository.

Wu, Yu, et al. "Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots." ACL 2017.


MultiTurnResponseSelection's Issues

Input vocabulary size?

If I'm correct, I obtain a vocabulary of more than 600,000 distinct words from the Ubuntu dataset. That number seems very big. Do you limit the vocabulary size for the model?
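A common workaround (not necessarily what the authors did) is to keep only the most frequent tokens and map the rest to an unknown symbol:

```python
from collections import Counter

def build_vocab(utterances, max_size, unk="<unk>"):
    """Keep only the max_size most frequent tokens; everything else maps to unk."""
    counts = Counter(tok for utt in utterances for tok in utt.split())
    vocab = {unk: 0}
    for tok, _ in counts.most_common(max_size):
        vocab[tok] = len(vocab)
    return vocab

vocab = build_vocab(["a b c", "a b", "a"], max_size=2)
# "a" and "b" are kept; "c" falls back to <unk> at lookup time
```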

Tokenize method

How did you tokenize the raw Douban corpus? I'm trying to test my model with new data, so I need to tokenize it first.

Error in SMN_Dynamic

I am having an issue with SMN_Dynamic

pygpu.gpuarray.GpuArrayException: ('mismatched shapes', 2)
Apply node that caused the error: GpuDot22(GpuReshape{2}.0, <GpuArrayType<None>(float32, matrix)>)
Toposort index: 756
Inputs types: [GpuArrayType<None>(float32, matrix), GpuArrayType<None>(float32, matrix)]
Inputs shapes: [(2000, 200), (100, 50)]
Inputs strides: [(40000, 4), (200, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[GpuReshape{3}(GpuDot22.0, MakeVector{dtype='int64'}.0)]]

I'm calling train with 200 for the hidden size and embedding:
train(datasets,wordvecs.W,batch_size=200,max_l=max_word_per_utterence ,hidden_size=200,word_embedding_size=200,model_name='SMN_Dynamic.bin')
Can you help, please?

Where is the training set?

We release Douban Conversation Corpus, comprising a training data set, a development set and a test set

I can only see the test set (test.txt) and a small sample of the training set. Where is the rest of the training (and validation) set?

Could you please provide the binary train/test you used in your paper

Hi Wu,

We tried to carefully replicate your work in TensorFlow, and the performance is awesome; however, our model is still 3-4% lower than what you reported in your paper.

We suspect the reason is that the word embeddings we used for initialization differ from yours. Could you provide the train/test binary files that you used in your code?

Best wishes,
Xijuan

Input error in the TensorFlow version

In the TensorFlow version, actions never reaches the model-training step; true_utt is always fed to the response placeholder.

Douban data

Your work has been very inspiring. On GitHub I can only see a train sample of the Douban data. Where will the full dataset be released, and when?

A question about the response

As I understand it, the response candidate r is a fixed sentence, and currently the whole sentence r is fed into the SMN.

What would happen if r were replaced by its id?

@MarkWuNLP Thanks!

Broken URLs

The OneDrive URLs no longer work. Could you fix them? Thanks.

Where is word2vec.model?

Hello, following the README, I edited the paths and ran PreProcess.py, but it fails with an error that word2vec.model cannot be found. Where can I find the word2vec.model used in PreProcess.py, as well as the other files it needs, such as train.topic and mergedic2.txt? Thanks!

How many epochs does it take to train UDC?

Thank you for sharing the code for your paper. How many epochs did you train the model on the Ubuntu Dialogue Corpus for? Was training done on a single GPU? If so, how long did a single epoch take on average?

Questions on running SMN_Last.py / Not working

Hi, Yu,

I followed the instructions in #14 to run SMN_Last.py, but it does not work. As you suggested, I modified the parameters word_embedding_size=200 and hidden_size=200 in SMN_Last.py.

The following is the error message. Do you know the reason for this? Thanks!

trainning data 949999 val data 50001
image shape (200, 2, 50, 50)
filter shape (8, 2, 3, 3)
/mnt/scratch/lyang/working/PycharmProjects/NLPIRNNMatchZooQA/src-match-zoo-lyang-dev/matchzoo/conqa/smn_yuwu_acl17/src/CNN.py:233: UserWarning: DEPRECATION: the 'ds' parameter is not going to exist anymore as it is going to be replaced by the parameter 'ws'.
self.output =theano.tensor.signal.pool.pool_2d(input=conv_out_tanh, ds=self.poolsize, ignore_border=True,mode="max")
(200, 50)
[[ -7.18495249e-03 1.15389853e-02 -4.65663650e-03 ..., 1.29073645e-02
5.36817317e-03 -3.73283600e-03]
[ -7.56547710e-04 2.52639169e-03 2.14198747e-03 ..., 1.87613393e-03
3.13413644e-03 -1.80455989e-03]
[ 1.70666858e-02 -1.36987258e-02 2.04090084e-02 ..., 6.40176979e-03
2.26856302e-02 -3.21805640e-02]
...,
[ 1.94880138e-02 1.08157883e-03 7.03368021e-05 ..., 1.80350402e-02
-3.28865572e-03 -1.50815454e-02]
[ 5.96401096e-03 -9.91006641e-04 4.77976783e-03 ..., 2.32200795e-02
3.23144545e-02 -8.65365822e-03]
[ -6.93387402e-04 3.27511476e-03 2.07571058e-03 ..., 1.42423584e-03
3.12527304e-03 8.35354059e-04]]
Traceback (most recent call last):
File "SMN_Last.py", line 422, in
,hidden_size=200,word_embedding_size=200)
File "SMN_Last.py", line 334, in train
train_model = theano.function([index], cost,updates=grad_updates, givens=dic,on_unused_input='ignore')
File "/usr/lib/python2.7/site-packages/theano/compile/function.py", line 326, in function
output_keys=output_keys)
File "/usr/lib/python2.7/site-packages/theano/compile/pfunc.py", line 449, in pfunc
no_default_updates=no_default_updates)
File "/usr/lib/python2.7/site-packages/theano/compile/pfunc.py", line 208, in rebuild_collect_shared
raise TypeError(err_msg, err_sug)
TypeError: ('An update must have the same type as the original shared variable (shared_var=<TensorType(float32, matrix)>, shared_var.type=TensorType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.')
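Not an official fix, but the traceback says the update is float64 while the shared variable is float32. This mismatch typically comes from dtype promotion: mixing float32 with plain Python floats or default-float64 NumPy arrays yields float64. A minimal NumPy illustration:

```python
import numpy as np

# A float32 parameter, like the float32 shared variable in the error message.
w = np.ones(3, dtype=np.float32)

# A plain Python float behaves as float64, so the update is promoted to
# float64, which no longer matches the float32 shared variable.
lr = 0.01
update = w - lr * np.ones(3)
assert update.dtype == np.float64

# Casting constants to float32 keeps the update in float32.
update32 = w - np.float32(lr) * np.ones(3, dtype=np.float32)
assert update32.dtype == np.float32
```

In Theano, running with `THEANO_FLAGS=floatX=float32` and casting constants with `numpy.float32` usually resolves this class of error.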

Douban multi-turn dialogue

Hello, I came across your open-sourced multi-turn dialogue corpus while reading your paper. Could you send me the training and validation sets? ([email protected]) Thank you very much!

What versions of the dependencies are required?

Hi, the code I cloned crashes, and I can't figure out why:

MultiTurnResponseSelection/src/CNN.py:233: UserWarning: DEPRECATION: the 'ds' parameter is not going to exist anymore as it is going to be replaced by the parameter 'ws'.
self.output =theano.tensor.signal.pool.pool_2d(input=conv_out_tanh, ds=self.poolsize, ignore_border=True,mode="max")
Traceback (most recent call last):
File "SMN_Last.py", line 422, in
,hidden_size=100,word_embedding_size=100)
File "SMN_Last.py", line 334, in train
train_model = theano.function([index], cost,updates=grad_updates, givens=dic,on_unused_input='ignore')
File "/usr/lib64/python2.7/site-packages/theano/compile/function.py", line 326, in function
output_keys=output_keys)
File "/usr/lib64/python2.7/site-packages/theano/compile/pfunc.py", line 449, in pfunc
no_default_updates=no_default_updates)
File "/usr/lib64/python2.7/site-packages/theano/compile/pfunc.py", line 208, in rebuild_collect_shared
raise TypeError(err_msg, err_sug)
TypeError: ('An update must have the same type as the original shared variable (shared_var=<TensorType(float32, matrix)>, shared_var.type=TensorType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.')

Is this a Theano version problem?

where is the training data

Nice work! You mentioned in the paper that you released the Douban corpus, but I couldn't find the training data in this project.

The README is not detailed enough

I hope the README can be made more detailed, especially the description of the data it uses.

On the Ubuntu dataset, how low does the loss go?

I trained for 2 epochs and got loss=0.334, R10_1=0.646, R2_1=0.895; strangely, the last two metrics never changed during training. How low does your loss go, and can you reach the numbers reported in the paper?
I'm using the TensorFlow version.
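For reference, R10_1 and R2_1 are recall-at-1 metrics over 10 (resp. 2) candidates; assuming one positive response per context, the computation can be sketched as:

```python
def recall_at_k(scores, positive_index, k):
    """Return 1 if the positive candidate ranks in the top k by score, else 0."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return 1 if positive_index in ranked[:k] else 0

# Ten candidates; the positive (index 0) has the highest score, so R10@1 = 1.
print(recall_at_k([0.9, 0.1, 0.2, 0.05, 0.3, 0.1, 0.0, 0.4, 0.2, 0.1], 0, 1))
```

The corpus-level metric is the mean of this indicator over all test contexts.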
