
tara's People

Contributors

ayyyq


tara's Issues

AMRBART

I can't wait for you to release code that uses AMRBART!

bert

OSError: Can't load tokenizer for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer.

I have downloaded bert-base-uncased, but the script still reports that it cannot find the tokenizer, even after I changed the default value of load_model_dir to the path of the downloaded files.
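
In case it helps, a minimal sketch of loading the tokenizer from a local directory with transformers; the path is a placeholder, and the directory must contain the tokenizer files (vocab.txt, tokenizer_config.json, config.json):

    from transformers import BertTokenizer

    # Placeholder path: point this at the directory holding the downloaded
    # bert-base-uncased files rather than at the hub name.
    local_dir = "/path/to/bert-base-uncased"

    # Loading from a local directory skips the hub lookup entirely.
    tokenizer = BertTokenizer.from_pretrained(local_dir)
    print(tokenizer.tokenize("Event extraction with AMR graphs"))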

Parse AMR

Dear author, thanks for the amazing work and the open-sourced code.
I'm puzzled by some parts of your AMR parsing setup:

  1. The first question is about transition-amr-parser. First, I downloaded the "transition-amr-parser" folder and the "amr_general" folder, as mentioned in TSAR. Then I installed the required package with the command "pip install transition-amr-parser". However, when I ran "python amrparse.py", I got the errors "No module named fairseq" and "ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+". I resolved these by running "pip install fairseq==0.10.2" and "pip install urllib3==1.26.15", respectively. However, when I run "python amrparse.py" again, I still get "KeyError: 'stack_transformer_6x6_nopos'", raised in amrparse.py at the line "parser = AMRParser.from_checkpoint(path)"; it seems to be related to the fairseq package (see the sketch at the end of this issue). Have you ever encountered this problem?

  2. The second question is about AMRBART. May I ask whether the data needs to be compressed after being parsed into AMR by AMRBART?

Thank you for your patience.
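
For reference, a minimal reproduction of the failing call from question 1 under the version pins above; the checkpoint path is a placeholder, and the import path is an assumption that may vary across versions of transition-amr-parser:

    # Environment pins from the steps above:
    #   pip install transition-amr-parser
    #   pip install fairseq==0.10.2
    #   pip install urllib3==1.26.15
    from transition_amr_parser.parse import AMRParser  # import path assumed

    # Placeholder: a checkpoint inside the downloaded "amr_general" folder.
    path = "amr_general/checkpoint_best.pt"

    # This call raises KeyError: 'stack_transformer_6x6_nopos', which suggests
    # the installed fairseq does not have that model architecture registered.
    parser = AMRParser.from_checkpoint(path)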

lower performance

Thank you for sharing! However, the results I obtained on the RAMS dataset are lower than those in the paper. I wonder whether this could be due to my virtual environment. While running the script you provided, I hit "TypeError: __init__() got an unexpected keyword argument 'label_smoothing'", which may be caused by a package version mismatch (a version-check sketch follows the scores below). Could you please share the details of your environment?
========for span F1========
dev:  Precision: 40.9049  Recall: 50.4510  F1: 45.1792
test: Precision: 42.5606  Recall: 50.1748  F1: 46.0551
========for head F1========
dev:  Precision: 46.7093  Recall: 57.6099  F1: 51.5901
test: Precision: 49.3327  Recall: 58.1585  F1: 53.3833
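
A note on the TypeError above: torch.nn.CrossEntropyLoss only accepts the label_smoothing argument from PyTorch 1.10 onward, so this error usually means an older torch is installed. A quick check:

    import torch
    import torch.nn as nn

    # label_smoothing was added to nn.CrossEntropyLoss in PyTorch 1.10; on
    # older versions the line below raises
    # "TypeError: __init__() got an unexpected keyword argument 'label_smoothing'".
    print(torch.__version__)
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)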

Request for Demo Code

Is it possible to provide an example script showing how to go from a single raw AMR graph to the simplified representation proposed in the paper?

Inquiry regarding a specific step in your paper

Dear author,
Thank you for this work.
I found the paper well-researched; however, I have a question about a specific point. In Section 2.2 (Missing spans), the concept of a "match" is mentioned, and I would like to seek further explanation to better understand it.

If a generated span partially matches a node, we add a new node to represent this span and inherit connectives from the partially matched node. We also add a special edge between this node and the new node to indicate their overlap. If a generated span fails to match any existing nodes, we add a new node and connect it to the nearest nodes to its left and right with a special edge.

Or could you kindly provide guidance on which part of the code corresponds to the aforementioned steps?
Thank you very much for considering my inquiry.
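
For readers with the same question, a hypothetical Python sketch of the quoted rule; the data structures and names below are illustrative assumptions, not the authors' code:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        span: str
        connectives: list = field(default_factory=list)

    def attach_span(span, nodes, edges, match=None, left=None, right=None):
        """Hypothetical reading of the quoted rule.

        match: a partially matched existing Node, or None.
        left/right: the nearest existing nodes when nothing matches.
        """
        new = Node(span)
        if match is not None:
            # Partial match: inherit connectives from the matched node and
            # mark the overlap with a special edge.
            new.connectives = list(match.connectives)
            edges.append((match, new, "overlap"))
        else:
            # No match at all: connect the new node to its nearest left and
            # right neighbors with a special edge.
            edges.append((left, new, "adjacent"))
            edges.append((new, right, "adjacent"))
        nodes.append(new)
        return new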

lower performance than the paper results

Hi.
Thanks for your code! According to the paper, the results using AMRBART on the WikiEvents test set with RoBERTa-large are as below.
Head F1 (Id): 78.35  Coref F1 (Id): 76.29  Head F1 (Cls): 73.07  Coref F1 (Cls): 70.83

But I obtained lower performance, as follows:

========for Head F1 Identification========
Precision: 71.9361  Recall: 81.4889  F1: 76.4151
========for COREF F1 Identification========
Precision: 70.6927  Recall: 80.0805  F1: 75.0943
========for Head F1 Classification========
Precision: 67.1403  Recall: 76.0563  F1: 71.3208
========for COREF F1 Classification========
Precision: 66.0746  Recall: 74.8491  F1: 70.1887

This puzzles me. I set the same hyperparameters as in the appendix and used the preprocessed data you provided. Did I do something wrong?

Thanks for your reply.
