asknowqa / sqg
Query Generation for Question Answering over Knowledge Bases
License: GNU General Public License v3.0
The following input to SQG freezes it with CPU consumption at 100%
Why does the sample curl command in the README have a blank surface field?
There are cases like the one below where the queries are identical except for the variable names:
?u_0 <http://dbpedia.org/property/awards> <http://dbpedia.org/resource/Goethe_Prize> . ?u_0 <http://dbpedia.org/property/awards> ?u_1
and
?u_1 <http://dbpedia.org/property/awards> <http://dbpedia.org/resource/Goethe_Prize> . ?u_1 <http://dbpedia.org/property/awards> ?u_0
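Duplicates like these can be detected by canonicalizing variable names before comparison. A minimal sketch (the function name and the ?vN renaming scheme are my own, not SQG's):

```python
import re

def canonicalize(where_clause):
    """Rename variables in order of first appearance (?u_0, ?u_1, ... -> ?v0, ?v1, ...)
    so that queries differing only in variable naming compare equal."""
    mapping = {}

    def rename(match):
        var = match.group(0)
        if var not in mapping:
            mapping[var] = "?v{}".format(len(mapping))
        return mapping[var]

    return re.sub(r"\?\w+", rename, where_clause)

q1 = ("?u_0 <http://dbpedia.org/property/awards> <http://dbpedia.org/resource/Goethe_Prize> . "
      "?u_0 <http://dbpedia.org/property/awards> ?u_1")
q2 = ("?u_1 <http://dbpedia.org/property/awards> <http://dbpedia.org/resource/Goethe_Prize> . "
      "?u_1 <http://dbpedia.org/property/awards> ?u_0")
print(canonicalize(q1) == canonicalize(q2))  # True
```

Deduplicating on the canonical form would collapse both variants into a single query.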
Graph.to_where_statement should not return queries that result in no answer from the KB, except for boolean queries.
Creating a requirements (dependencies) text file at the root of the repository, listing all the Python packages that SQG uses, would help a lot, especially for dockerizing the code.
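One common way to produce such a file, assuming the maintainer's environment already has the correct package versions installed:

```shell
# Snapshot the exact package versions of the current environment
python3 -m pip freeze > requirements.txt

# A fresh environment (or a Dockerfile RUN step) can then reproduce it
python3 -m pip install -r requirements.txt
```

Pinning exact versions this way keeps a Docker build reproducible.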
Hello.
I tried to run this code, but found that some files needed to run it are missing, even though I downloaded the dataset, model, and bloom filter mentioned in the README.
For example, orchestrator.Orchestrator.rank requires various files to run.
Some of them, for example %s.pt (maybe lcquad.pt?), can be downloaded from your Google Drive,
and I know stanford-parser.jar is the Stanford NLP parser file.
But other files, such as dataset.vocab and dataset_embed.pth, are unknown to me.
I found a build_vocab function in learning/lstm/scripts/preprocess-lcquad.py,
but I'm not sure it is directly related to the dataset.vocab file.
As for dataset_embed.pth, I can't find any related code in this repository.
Would you share those files, or tell me how to generate them?
Thanks for reading.
The port on which SQG listens should be configurable via a config file or command line flags.
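A minimal sketch of such a flag, assuming a Python entry point (the flag name, environment variable, and default are hypothetical, not SQG's current interface):

```python
import argparse
import os

def get_port(argv=None, default=5000):
    """Resolve the listening port: --port flag first, then the SQG_PORT
    environment variable, then the default."""
    parser = argparse.ArgumentParser(description="SQG server")
    parser.add_argument("--port", type=int,
                        default=int(os.environ.get("SQG_PORT", default)),
                        help="port the SQG HTTP API listens on")
    return parser.parse_args(argv).port

print(get_port(["--port", "8765"]))  # 8765
```

The resolved value would then be passed to the web server's run call instead of a hard-coded port.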
The current implementation takes a long time to fetch answers because of 2-hop queries against the knowledge base. This delay makes the module unusable in a real-time, human-facing QA system. I would ask the author to move 2-hop queries to bloom filters as well.
It would be nice if your API response returned the question type, e.g. count, boolean, list (array of URIs), or resource (single URI). Since you already have a query classifier, and the views on the web differ based on the query type, this would be a useful field to return from your API.
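A hypothetical response shape with such a field (the type field and its values follow the suggestion above; they are not part of the current API):

```json
{
  "queries": [
    {
      "query": "SELECT ?u_0 WHERE { ... }",
      "confidence": 0.9
    }
  ],
  "type": "list"
}
```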
Hi @hamidzafar
I am trying to replicate your repository locally, but I am not able to generate lc_quad_gold.json since the SPARQL endpoint you are using seems to be unavailable. Would it be possible to share this generated dataset so that I can replicate your work?
Thanks
If DBpedia is not accessible, please throw an error with a message explaining to the user that the KB is unreachable.
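A sketch of such a guard (the function and exception names are hypothetical, not SQG's code):

```python
import urllib.error
import urllib.request

class KBUnreachableError(Exception):
    """Raised when the configured knowledge-base endpoint cannot be reached."""

def check_endpoint(url, timeout=5):
    """Fail fast with a clear message instead of hanging on an unreachable KB."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
    except (urllib.error.URLError, OSError) as err:
        raise KBUnreachableError(
            "The knowledge base at {} is unreachable: {}".format(url, err))
```

Calling this once at startup (and before each batch of queries) would surface connectivity problems immediately.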
It would be nice for the SQG API to accept a timeout parameter, something like this:
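For illustration, a hypothetical request body (the timeout field and its milliseconds unit are assumptions, not part of the current API):

```json
{
  "question": "What is the birth place of Barack Obama?",
  "entities": [],
  "relations": [],
  "kb": "dbpedia",
  "timeout": 5000
}
```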
Hi,
I am trying to replicate your repository locally by running query_gen.py with linker=1. However, the required file "data/LC-QUAD/EARL/output.json" seems to be missing. Could you tell me how to generate that file so that I can replicate your results?
Thanks
Hi,
I tried out SQG but realized that there should be two bloom files, spo1.bloom and spo2.bloom. However, they are missing from the master branch.
Can you provide them, or explain how to create them?
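For illustration, the idea behind such files is a bloom filter keyed on subject–predicate (and two-hop) combinations. A minimal pure-Python sketch; the real spo1.bloom/spo2.bloom were presumably built with a dedicated bloom-filter library, and the key format here is an assumption:

```python
import hashlib

class Bloom:
    """Minimal bloom filter: k hash positions per item over a fixed bit array."""
    def __init__(self, size=1 << 20, hashes=4):
        self.size = size
        self.hashes = hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha1("{}:{}".format(i, item).encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Assumed key format: spo1 holds subject|predicate pairs from the KB dump;
# spo2 would hold two-hop combinations built the same way.
spo1 = Bloom()
spo1.add("dbr:Barack_Obama|dbo:birthPlace")
print("dbr:Barack_Obama|dbo:birthPlace" in spo1)  # True
```

The filter answers "definitely absent" or "probably present", which is enough to prune candidate triples without hitting the SPARQL endpoint.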
Thanks!
Pretrained question classifier missing from Google drive link
I have followed the instructions to run SQG with all the dependencies (with all the noted versions) and it seems to run smoothly and without any problem.
I am trying to run the example call provided in the README, but the response comes back as {}. I have played around with the query and tried
{
  "question": "What is the birth place of Barack Obama?",
  "relations": [
    {
      "surface": "",
      "uris": [
        {
          "confidence": 1,
          "uri": "http://dbpedia.org/ontology/birthPlace"
        }
      ]
    }
  ],
  "entities": [
    {
      "surface": "",
      "uris": [
        {
          "confidence": 1,
          "uri": "http://dbpedia.org/resource/Barack_Obama"
        }
      ]
    }
  ],
  "kb": "dbpedia"
}
and another query:
{
  "question": "Who is the wife of Barack Obama?",
  "relations": [
    {
      "surface": "",
      "uris": [
        {
          "confidence": 1,
          "uri": "http://dbpedia.org/ontology/spouse"
        }
      ]
    }
  ],
  "entities": [
    {
      "surface": "",
      "uris": [
        {
          "confidence": 1,
          "uri": "http://dbpedia.org/resource/Barack_Obama"
        }
      ]
    }
  ],
  "kb": "dbpedia"
}
I always get the same response (an empty set of queries):
{}
I have checked errors.log and info.log and there is nothing out of the ordinary there; I am also watching the Python console for errors/warnings, and there is nothing to see either.
I am running the code on Ubuntu 18.04.1 LTS Desktop with an Intel(R) Xeon(R) W-2133 CPU and 64 GB of RAM.
Could you point me in the direction of what might be causing the problem?
kb/dbpedia.py does not set the bloom path in __init__()
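A hypothetical fix, passing the paths through the constructor (the attribute names and file names follow the spo1.bloom/spo2.bloom convention mentioned in other issues; the real class layout may differ):

```python
class DBpedia:
    """Sketch of a KB wrapper whose bloom file paths are configurable
    instead of being left unset."""
    def __init__(self, endpoint="http://dbpedia.org/sparql",
                 one_hop_bloom_file="./data/blooms/spo1.bloom",
                 two_hop_bloom_file="./data/blooms/spo2.bloom"):
        self.endpoint = endpoint
        self.one_hop_bloom_file = one_hop_bloom_file
        self.two_hop_bloom_file = two_hop_bloom_file

kb = DBpedia()
print(kb.one_hop_bloom_file)  # ./data/blooms/spo1.bloom
```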