cooelf / DeepUtteranceAggregation
Modeling Multi-turn Conversation with Deep Utterance Aggregation (COLING 2018)
Home Page: https://www.aclweb.org/anthology/C18-1317
Traceback (most recent call last):
File "PreProcess.py", line 206, in <module>
ParseMultiTurn()
File "PreProcess.py", line 200, in ParseMultiTurn
word2vec = WordVecs(args.pretrained_embedding, vocab, True, True)
File "PreProcess.py", line 114, in __init__
self.W, self.word_idx_map = self.get_W(word_vecs, k=self.k)
File "PreProcess.py", line 127, in get_W
W[i] = word_vecs[word]
ValueError: could not broadcast input array from shape (200) into shape (2582)
What should I do now? Could somebody help me out, please?
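For what it's worth, this error usually means the pretrained vectors have a different dimensionality than the matrix `W` was allocated with (here, vectors of dimension 200 vs. a row width of 2582). A minimal sketch of a dimension check before filling `W` (the `word_vecs` dict and `build_W` helper are hypothetical stand-ins for what `PreProcess.py` builds, not the repo's actual code):

```python
import numpy as np

def build_W(word_vecs, k):
    """Build an embedding matrix, verifying every vector matches dimension k.

    word_vecs: dict mapping word -> 1-D numpy array (a stand-in for the dict
    that PreProcess.py builds from the pretrained embeddings).
    """
    W = np.zeros((len(word_vecs) + 1, k), dtype=np.float32)  # row 0 reserved for padding
    word_idx_map = {}
    for i, (word, vec) in enumerate(word_vecs.items(), start=1):
        if vec.shape != (k,):
            # Fail early with a clear message instead of a broadcast error.
            raise ValueError(
                "vector for %r has shape %s, expected (%d,); check that "
                "args.pretrained_embedding matches k" % (word, vec.shape, k))
        W[i] = vec
        word_idx_map[word] = i
    return W, word_idx_map
```

If the check fires, the pretrained embedding file passed via `args.pretrained_embedding` likely does not match the expected dimension, or the file header is being parsed as a vector.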
Hello, thank you for sharing the E-commerce dataset!
However, regarding the pretrained word2vec model, I could only find a script in this repository, which seems insufficient for reproducing the results in your paper. Could you share the word2vec model you used, or tell me how you pretrained the word embeddings (the settings, hyperparameters, training data, etc.)?
Hi!
Could you please tell me what training speed you get with this code?
Could you please share the source code you used to reproduce the baseline results on your new dataset?
Hello, I am a graduate student at Chongqing University. While reading your paper I ran into a question about the Turns-aware Aggregation Encoding section, specifically about how the message is concatenated with the other utterances:
Suppose an utterance has shape 50×200 before concatenation (50 words, 200-dimensional word vectors). After concatenation, does it become 100×200? That is, does the conversation turn go from 50 words to 100 words after concatenation?
I am not sure whether the concatenation changes the shape this way, and I hope you can clarify. Thank you!
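To make the two readings of the question above concrete, here is a toy numpy sketch (made-up arrays, not the paper's actual code): concatenating along the word axis doubles the number of words, while concatenating along the embedding axis doubles each word vector's dimension.

```python
import numpy as np

utterance = np.random.rand(50, 200)  # 50 words, 200-dim word vectors
message = np.random.rand(50, 200)

# Concatenating along axis 0 (the word/timestep axis): 50 words -> 100 words.
along_words = np.concatenate([utterance, message], axis=0)
print(along_words.shape)  # (100, 200)

# Concatenating along axis 1 (the embedding axis): each word vector doubles.
along_dims = np.concatenate([utterance, message], axis=1)
print(along_dims.shape)  # (50, 400)
```

Which of the two the paper intends is exactly what the question asks the authors to confirm.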
The Taobao data is tokenized. Do you have the original, untokenized data? If so, could you please share it, e.g. via a URL or by e-mail to [email protected]? Thank you very much.
ub16c9@ub16c9-gpu:/media/ub16c9/fcd84300-9270-4bbd-896a-5e04e79203b7/ub16_prj/DeepUtteranceAggregation$ bash train.sh
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
image shape (10, 2, 50, 50)
filter shape (8, 2, 3, 3)
/usr/local/lib/python2.7/dist-packages/theano/tensor/nnet/conv.py:98: UserWarning: theano.tensor.nnet.conv.conv2d is deprecated. Use theano.tensor.nnet.conv2d instead.
warnings.warn("theano.tensor.nnet.conv.conv2d is deprecated."
/media/ub16c9/fcd84300-9270-4bbd-896a-5e04e79203b7/ub16_prj/DeepUtteranceAggregation/model.py:515: UserWarning: DEPRECATION: the 'ds' parameter is not going to exist anymore as it is going to be replaced by the parameter 'ws'.
self.output =theano.tensor.signal.pool.pool_2d(input=conv_out_tanh, ds=self.poolsize, ignore_border=True,mode="max")
Traceback (most recent call last):
File "main.py", line 489, in <module>
val_frequency=args.val_frequency)
File "main.py", line 408, in main
givens=dic, on_unused_input='ignore')
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function.py", line 317, in function
output_keys=output_keys)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 449, in pfunc
no_default_updates=no_default_updates)
File "/usr/local/lib/python2.7/dist-packages/theano/compile/pfunc.py", line 208, in rebuild_collect_shared
raise TypeError(err_msg, err_sug)
TypeError: ('An update must have the same type as the original shared variable (shared_var=<TensorType(float32, matrix)>, shared_var.type=TensorType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.')
ub16c9@ub16c9-gpu:/media/ub16c9/fcd84300-9270-4bbd-896a-5e04e79203b7/ub16_prj/DeepUtteranceAggregation$
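Not the author, but this `TypeError` typically means a parameter update was computed in float64 while the shared variable is float32. A common workaround (an assumption about this setup, not a verified fix for this repo) is to run with `THEANO_FLAGS=floatX=float32` and cast each update to its shared variable's dtype, e.g. `updates = [(p, T.cast(u, p.dtype)) for p, u in updates]`. The numpy sketch below shows how the promotion happens and how a cast resolves it:

```python
import numpy as np

# Shared parameter stored in float32, as Theano shared variables usually are
# when floatX=float32.
param = np.zeros((3, 3), dtype=np.float32)

# A gradient step involving float64 values silently promotes the result to
# float64 -- exactly the mismatch Theano's error message is reporting.
update = param - 0.01 * np.ones((3, 3))
print(update.dtype)  # float64

# Casting the update back to the shared variable's dtype fixes the mismatch.
safe_update = update.astype(param.dtype)
print(safe_update.dtype)  # float32
```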
The URL seems to be unreachable.
It is a bit strange that you concatenate the representation of each utterance/response and the last utterance along the last axis (https://github.com/cooelf/DeepUtteranceAggregation/blob/master/main.py#L327).
Can you give some explanations?
Ps.
If I did not misunderstand your idea, you concatenate the representation of each utterance/response (S_j = [h_{j,1}, h_{j,2}, ..., h_{j,n}]) and the last utterance (S_t = [h_{t,1}, h_{t,2}, ..., h_{t,n}]). What we get is [[h_{j,1}, h_{t,1}], [h_{j,2}, h_{t,2}], ..., [h_{j,n}, h_{t,n}]].
That is, the representations at the same timestep are concatenated into one vector. I don't think this is reasonable, since it means little that two words from two distinct sentences happen to share a timestep.
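A small numpy sketch of what that line appears to do, under my reading of main.py#L327 (which may be wrong, toy sizes throughout):

```python
import numpy as np

n, d = 5, 4  # n timesteps, d-dim hidden states (toy sizes)
S_j = np.random.rand(n, d)  # representation of utterance j
S_t = np.random.rand(n, d)  # representation of the last utterance t

# Concatenating along the last axis pairs hidden states by timestep:
# row i becomes [h_{j,i} ; h_{t,i}].
paired = np.concatenate([S_j, S_t], axis=-1)
print(paired.shape)  # (5, 8)
```

If this reading is right, the concatenation does pair words from two different sentences purely by position, which is what the comment above questions.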