ToolQA is a new dataset for evaluating the capabilities of LLMs in answering challenging questions with external tools. It offers two difficulty levels (easy/hard) across eight real-life scenarios.
Hello, we are trying to replicate your work, but we haven't obtained the results reported in your paper on the GSM8k easy dataset. Upon reviewing the output files, we noticed that you used exact match (EM) as the evaluation metric, under which answers such as 4 and 4.0, or 50$ and 50, are considered incorrect. Did you encounter these situations during your work?
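For reference, here is a minimal sketch of the kind of answer normalization that could be applied before the exact-match comparison to handle these cases. The function names are hypothetical and not part of the ToolQA codebase; this is just to illustrate the mismatch we observed:

```python
import re

def normalize_answer(ans: str) -> str:
    """Hypothetical normalization before exact-match scoring:
    strips currency symbols, commas, and whitespace, and collapses
    numeric forms like '4.0' and '4' to one canonical string."""
    s = ans.strip().replace("$", "").replace(",", "")
    try:
        num = float(s)
        # Represent integral floats ('4.0') the same as integers ('4').
        return str(int(num)) if num.is_integer() else str(num)
    except ValueError:
        return s.lower()

def exact_match(pred: str, gold: str) -> bool:
    return normalize_answer(pred) == normalize_answer(gold)

# Both pairs below count as incorrect under plain string EM,
# but match after normalization.
assert exact_match("4.0", "4")
assert exact_match("50$", "50")
```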
Thank you for sharing this wonderful dataset. I noticed a mismatch between questions and answers in Lines 21-40. It seems to be caused by cell In [10] of dataset_generation/hard_questions/hard_agenda.ipynb.
BTW, I am very curious about the detailed process of calling APIs during inference. Looking forward to the benchmark code being released soon (:
The Coffee raw-data link in the README.md file currently leads to a 404 error page. I was wondering whether the raw data has been moved to a new location or is temporarily unavailable. Access to this dataset would be greatly beneficial for my work.
Could you please assist me by providing an updated link or guiding me on how I could obtain the Coffee raw dataset?
Thank you in advance for your time and help; it is greatly appreciated!
We greatly appreciate your work and intend to use it as a foundation for our research. I noticed that you've released the external corpus but haven't provided instructions on how to use the tools mentioned in your paper. Do you have any plans to release the code that controls these external tools?