University of Rochester, LIN 250 Course Research Project
To acquire the necessary Python packages to reproduce our results, you can use either conda or pip.
After activating the desired virtual environment (see Anaconda's official documentation on virtual environments for how to create one in the Anaconda Prompt terminal), the necessary packages can be installed using the following command.
$ conda install -c defaults -c stanfordnlp -c conda-forge python=3.8 pandas stanza nltk
We again recommend the use of a virtual environment (see Python's official documentation on venv) to avoid any configuration conflicts. After activating the desired virtual environment, the necessary packages can be installed using the following command.
$ pip install pandas stanza nltk
Note: On some systems, the pip command may default to Python 2, which will cause the installation to fail. In such a case, switching to pip3 typically resolves the issue.
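You can confirm which interpreter a given pip command is bound to by checking its version string:

$ pip --version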
Our implementation makes use of the stanza package. As part of the installation process, you need to download some data files by first opening a Python REPL (accessible by typing the python or python3 command once the desired virtual environment is activated) and typing the following.
>>> import stanza
>>> stanza.download('en')
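To verify the download succeeded, you can optionally build a pipeline and run it on a short sentence in the same REPL. The example sentence below is arbitrary and not part of the project's data:

>>> nlp = stanza.Pipeline('en')
>>> doc = nlp('The committee praised his singing of the anthem.')
>>> [word.text for word in doc.sentences[0].words]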
Our implementation makes use of the Punkt sentence tokenizer provided by the nltk package. In order for the tokenizer to function, you first need to download some data files using the following command.
$ python -m nltk.downloader punkt
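As a quick sanity check that the Punkt data is in place, the tokenizer can be exercised from a REPL; the sample string is arbitrary:

>>> import nltk
>>> nltk.sent_tokenize('This is one sentence. This is another.')
['This is one sentence.', 'This is another.']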
We divided the dataframe generation process into three stages: parse, extract, and classify. To carry out the stages, follow the instructions below in order, with the desired virtual environment activated.
During this stage, texts from the COCA corpus are parsed using stanza, with the results stored as Python pickle files. You can generate the parses for the texts contained within filename by using the following command.
$ python parse.py [filename]
You may optionally supply the parameter --directory to tell parse.py to store all the generated Python pickle files under a specific directory.
$ python parse.py --directory [directory] [filename]
After running parse.py, you should obtain a collection of Python pickle files (i.e., with the suffix .pkl) whose names are set to the COCA text ID (e.g., 4000568.pkl).
Note: You may only supply one filename at a time.
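If you want to inspect one of these parses, the pickle files can be loaded back with Python's standard pickle module. The file name below reuses the example ID from above; the exact type of the stored object depends on parse.py:

>>> import pickle
>>> with open('4000568.pkl', 'rb') as f:
...     doc = pickle.load(f)
...
>>> type(doc)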
During this stage, parses of COCA texts are examined and potential gerunds are extracted using our proposed patterns (see paper). The gerund candidates are also labeled with a "recommendation" for whether they should be excluded (e.g., words such as "king"). The specific recommendations for words that we have devised can be found in exclusion.csv. To extract the potential gerunds from the parse contained in filename, use the following command.
$ python extract.py exclusion.csv [filename]
You may optionally supply the parameter --output to tell extract.py to store the resulting CSV file at a particular output path.
$ python extract.py --output [output] exclusion.csv [filename]
After running extract.py, you should obtain a CSV file containing the gerund candidates. This file will be used by the classify stage. extracted.csv is the CSV file obtained during our experiment.
Note: You may supply multiple filenames (e.g., the UNIX wildcard *.pkl works).
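Since pandas is among the installed packages, the resulting CSV file is easy to inspect from a REPL; substitute your own file name if you used --output:

>>> import pandas as pd
>>> candidates = pd.read_csv('extracted.csv')
>>> candidates.head()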
This stage takes the extracted gerund candidates from extracted.csv (or the output file from the extract stage) and categorizes them into six types: poss-ing-of, poss-ing, ing-of, det-ing, acc-ing, and vp-ing. The result is written to a CSV file containing the final dataframe. This stage can be run with the following command.
$ python classify.py [directory of parsed pickle files] [extracted csv file] [dataframe csv file]
Note: The directory provided in the first argument must contain only pickle files.
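Once the final dataframe CSV has been generated, it can likewise be examined with pandas. The file name dataframe.csv and the column name category below are placeholders for illustration; use the names from your own run:

>>> import pandas as pd
>>> df = pd.read_csv('dataframe.csv')
>>> df['category'].value_counts()  # placeholder column assumed to hold the six gerund types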
For further detail about any of the stages, refer to README-parse.txt, README-extract.txt, and README-classify.txt.