giellalt / lang-lut

Finite state and Constraint Grammar based analysers and proofing tools, and language resources for the Lushootseed language

Home Page: https://giellalt.uit.no

License: Other

Languages: Makefile 10.21%, Shell 13.91%, M4 13.94%, Regular Expression 5.06%, XML 0.77%, YAML 1.88%, Text 54.24%
Topics: finite-state-transducers, constraint-grammar, minority-language, nlp, language-resources, proofing-tools, giellalt-langs, maturity-exper, geo-northamerica, langfam-salishan

lang-lut's Introduction

The Lushootseed (Southern Puget Sound Salish) morphology and tools

(Badges: Maturity, Lemma count, GitHub issues, Build Status, License, Desktop speller download, Mobile speller download)

This repository contains finite state source files for the Lushootseed language, for building morphological analysers, proofing tools and dictionaries. The data and implementation are licensed under the GNU GPL licence, as detailed in the LICENSE file. The authors named in the AUTHORS file are available to grant other licensing choices.

Install proofing tools and keyboards for the Lushootseed language by using the Divvun Installer (some languages are only available via the nightly channel).

Download and test speller files

The speller files downloadable at the top of this page (the *.bhfst files) can be used with divvunspell to test their performance. These files are exactly the same as the ones installed on users' computers and mobile phones. Desktop and mobile speller files differ from each other in the error model and should be tested separately, which is also why there are two different downloads.
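A quick way to exercise a downloaded speller file is to ask divvunspell for suggestions for a single word. The lines below are only a sketch: the suggest subcommand and the --archive flag are assumptions about the current divvunspell command-line tool, and lut.bhfst stands in for whichever speller file you downloaded, so check divvunspell --help for the exact interface.

# Sketch only: subcommand and flag names assumed, verify with `divvunspell --help`.
# Ask the downloaded speller archive whether it accepts a word and what it would suggest.
divvunspell suggest --archive lut.bhfst "sčətxʷəd"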

Documentation

Documentation can be found on the GiellaLT site: https://giellalt.uit.no

Core dependencies

In order to compile and use the Lushootseed language morphology and dictionaries, you need HFST and VislCG3 (installation commands below), as well as the standard GNU build tools (autoconf, automake and make).

To install VislCG3 and HFST, just copy/paste this into your Terminal on Mac OS X:

curl https://apertium.projectjj.com/osx/install-nightly.sh | sudo bash

or terminal on Ubuntu, Debian or Windows Subsystem for Linux:

wget https://apertium.projectjj.com/apt/install-nightly.sh -O - | sudo bash
sudo apt-get install cg3 hfst

or terminal on RedHat, Fedora, CentOS or Windows Subsystem for Linux:

wget https://apertium.projectjj.com/rpm/install-nightly.sh -O - | sudo bash
sudo dnf install cg3 hfst

Alternatively, the Apertium wiki has good instructions on how to install the dependencies on both Mac OS X and Linux.
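After installation, a quick sanity check is to ask the two main tools for their version numbers; if both commands print version information, the dependencies are on your PATH:

hfst-lookup --version
vislcg3 --version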

Further details and dependencies are described on the GiellaLT Getting Started pages.

Downloading

Using Git:

git clone https://github.com/giellalt/lang-lut

Using Subversion:

svn checkout https://github.com/giellalt/lang-lut.git/trunk lang-lut

Building and installation

INSTALL describes the GNU build system in detail, but for most users it is the usual:

./autogen.sh # This will automatically clone or check out other GiellaLT dependencies
./configure
make
(as root) make install
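The configure script also accepts project-specific options; ./configure --help lists them. If you prefer not to install system-wide, the usual autotools prefix mechanism works as well. A minimal sketch, assuming a user-writable prefix under your home directory:

./configure --prefix="$HOME/.local"
make
make install   # no root needed with a user-writable prefix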

Citing

Rueter, J., Hämäläinen, M., & Alnajjar, K. (2023). Modelling the Reduplicating Lushootseed Morphology with an FST and LSTM. In M. Mager, A. Ebrahimi, & A. Oncevay, et al. (Eds.), Proceedings of the Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP) (pp. 40-46). The Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.americasnlp-1.6

```bibtex
@inproceedings{f61050ee279c40a18cf138859332d422,
    title = "Modelling the Reduplicating Lushootseed Morphology with an FST and LSTM",
    abstract = "In this paper, we present an FST based approach for conducting morphological analysis, lemmatization and generation of Lushootseed words. Furthermore, we use the FST to generate training data for an LSTM based neural model and train this model to do morphological analysis. The neural model reaches a 71.9% accuracy on the test data. Furthermore, we discuss reduplication types in the Lushootseed language forms. The approach involves the use of both attested instances of reduplication and bare stems for applying a variety of reduplications to, as it is unclear just how much variation can be attributed to the individual speakers and authors of the source materials. That is, there may be areal factors that can be aligned with certain types of reduplication and their frequencies.",
    keywords = "6121 Languages, 113 Computer and information sciences",
    author = "Jack Rueter and Mika H{\"a}m{\"a}l{\"a}inen and Khalid Alnajjar",
    year = "2023",
    month = jul,
    doi = "10.18653/v1/2023.americasnlp-1.6",
    language = "English",
    pages = "40--46",
    editor = "Mager, {Manuel} and Ebrahimi, {Abteen} and {Oncevay, et al.}, {Arturo}",
    booktitle = "Proceedings of the Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP)",
    publisher = "The Association for Computational Linguistics",
    address = "United States",
    note = "Workshop on Natural Language Processing for indigenous Languages of the Americas ; Conference date: 14-06-2023 Through 14-06-2023",
    url = "https://turing.iimas.unam.mx/americasnlp/2023_workshop.html",
}
```


If you use language data from more than one GiellaLT language, consider citing
[our LREC 2022 article on the whole
infrastructure](https://aclanthology.org/2022.lrec-1.125/):

> Linda Wiechetek, Katri Hiovain-Asikainen, Inga Lill Sigga Mikkelsen,
  Sjur Moshagen, Flammie Pirinen, Trond Trosterud, and Børre Gaup. 2022.
  *Unmasking the Myth of Effortless Big Data - Making an Open Source
  Multi-lingual Infrastructure and Building Language Resources from Scratch*.
  In Proceedings of the Thirteenth Language Resources and Evaluation Conference,
  pages 1167–1177, Marseille, France. European Language Resources Association.

If you use BibTeX, the following entry is as it appears in the ACL Anthology:

```bibtex
@inproceedings{wiechetek-etal-2022-unmasking,
    title = "Unmasking the Myth of Effortless Big Data - Making an Open Source
    Multi-lingual Infrastructure and Building Language Resources from Scratch",
    author = "Wiechetek, Linda  and
      Hiovain-Asikainen, Katri  and
      Mikkelsen, Inga Lill Sigga  and
      Moshagen, Sjur  and
      Pirinen, Flammie  and
      Trosterud, Trond  and
      Gaup, B{\o}rre",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation
    Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.125",
    pages = "1167--1177"
}
```

lang-lut's People

Contributors

albbas, arnikki, bbqsrc, dylanhand, flammie, rueter, snomos, trondtr, unhammer


lang-lut's Issues

Disparity in lang-lut tokeniser vs other analysers due to certain multicharacter symbols

The tokeniser analyser (lang-lut/tools) appears to be missing some incremental element which is present in the other analysers (lang-lut/src). The multicharacter symbols {k̓ʷ|gʷ|y̓|c̓|xʷ|x̌ʷ|kʷ|ƛ̕|t̕|l̕} are declared in the src/fst/phonology.lexc and the src/fst/root.lexc. In fact, they are also declared in the tools/tokenisers/tokeniser-disamb-gt-desc.pmscript file, but for some reason the hfst-tokenise tool does not recognize tokens containing these particular characters.
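One way to double-check that the multicharacter symbols really are declared in all three files named above is a plain grep over the sources (a quick sketch, run from the repository root; the paths are the ones mentioned in the paragraph above):

# Look for one of the problematic symbols in all three files.
grep -n 'k̓ʷ' src/fst/phonology.lexc src/fst/root.lexc tools/tokenisers/tokeniser-disamb-gt-desc.pmscript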

echo 'bək̓ʷ'| hfst-tokenise -S -g --giella-cg -W $GIELLA_HOME/lang-lut/tools/tokenisers/tokeniser-disamb-gt-desc.pmhfst 
"<bək̓ʷ>"
	"bək̓ʷ" ?

All analyzers in the lang-lut/src branch, however, seem perfectly happy with these multicharacter symbols.

(base) lm7-nkiel1:lang-lut rueter$ hfst-lookup src/analyser-disamb-gt-desc.hfstol 
> bək̓ʷ
bək̓ʷ	bək̓ʷ+Adv+Tot	0.000000

> ^C
(base) lm7-nkiel1:lang-lut rueter$ hfst-lookup src/analyser-gt-desc.hfstol 
> bək̓ʷ
bək̓ʷ	bək̓ʷ+Adv+Tot	0.000000

> ^C
(base) lm7-nkiel1:lang-lut rueter$ hfst-lookup src/analyser-pmatchdisamb-gt-desc.hfst 
hfst-lookup: warning: It is not possible to perform fast lookups with foma format automata.
Using HFST basic transducer format and performing slow lookups
> bək̓ʷ
bək̓ʷ	bək̓ʷ+Adv+Tot	0.000000

> ^C
(base) lm7-nkiel1:lang-lut rueter$ hfst-lookup src/analyser-dict-gt-desc.hfstol 
> bək̓ʷ
bək̓ʷ	bək̓ʷ+Adv+Tot	0.000000

Here is a list of words that are analyzed by the lang-lut/src analysers but fail in the tokeniser.

bək̓ʷ    bək̓ʷ+Adv+Tot    0.000000
ck̓ʷaqid ck̓ʷaqid+Adv+Temp        0.000000
day̓     day̓+Adv+Deg     0.000000
dsc̓aliʔ sc̓aliʔ+N+Sg+Nom+PxSg1   0.000000
gʷəl    gʷəl+Aux+Top    0.000000
hagʷəxʷ hagʷ+Adv+Temp+Clt       0.000000
huyəxʷ  huy+Adv+Temp+Clt        0.000000
kʷi     kʷi+Pron+Dem    0.000000
luƛ̕     luƛ̕+N+Sg+Nom    0.000000
putəxʷ  put+Adv+Clt     0.000000
pədt̕əs  pədt̕əs+N+Sg+Nom 0.000000
sc̓aliʔ  sc̓aliʔ+N+Sg+Nom 0.000000
sc̓aliʔs sc̓aliʔ+N+Sg+Nom+PxSP3   0.000000
sčətxʷəd        sčətxʷəd+N+Sg+Nom       0.000000
tux̌ʷ    tux̌ʷ+Adv+Parenthetic    0.000000
tux̌ʷəxʷ tux̌ʷ+Adv+Parenthetic+Clt        0.000000
xʷiʔ    xʷiʔ+v1+Aux+Neg 0.000000
x̌ʷul̕    x̌ʷul̕+Adv        0.000000
čadəxʷ  čad+Adv+Interr+Clt      0.000000
ɬixʷdat ɬixʷdat+Adv+Temp        0.000000
əlgʷəʔ  əlgʷəʔ+Pron+Pers+Pl3+Nom        0.000000
ƛ̕aƛ̕ac̓apəd       ƛ̕aƛ̕ac̓apəd+N+Sg+Nom      0.000000
čəxʷ    čəxʷ+Pron+Pers+Sg2+Nom  0.000000

The problematic modifier letters include U+02B7 (modifier letter small w, ʷ).
The problematic combining diacritics are U+0313 (combining comma above) and U+0315 (combining comma above right).
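When debugging cases like this it helps to see exactly which codepoints a token contains. The sketch below assumes ICU's uconv is installed; the Any-Name transform name is an assumption worth verifying on your system. hexdump is a lower-level fallback that only shows the raw UTF-8 bytes.

# Show Unicode character names in a problematic token (assumes ICU uconv and the Any-Name transform).
printf 'bək̓ʷ' | uconv -x 'Any-Name'
# Fallback: show the raw UTF-8 bytes.
printf 'bək̓ʷ' | hexdump -C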

Tokenization problems with U+0313 and U+0315 in lut

The tokenizer has problems with combinations including either U+0313 (Combining comma above) or U+0315 (Combining comma above right), e.g. qawq̓s.

With the following command:

echo ' qawq̓s ' \
| hfst-tokenise -S -g --giella-cg -W -m $GIELLA_HOME/lang-lut/tools/tokenisers/tokeniser-disamb-gt-desc.pmhfst \
| vislcg3 -g $GIELLA_HOME/lang-lut/src/cg3/disambiguator.cg3 \
| less

the result is:
: qawq̓s \n

the same word will pass with:

hfst-lookup src/analyser-gt-desc.hfstol 
> qawq̓s
qawq̓s	qawq̓s+N+Sg+Nom	0.000000

What is curious is that, at present, the problematic symbols are U+0313 and U+0315. But there is one exception: the combination of the letter c and U+0313 (‹c̓›). There is no problem with that combination, nor are there problems with superscript letters.

echo ' bəc̓ac ' \
| hfst-tokenise -S -g --giella-cg -W -m $GIELLA_HOME/lang-lut/tools/tokenisers/tokeniser-disamb-gt-desc.pmhfst
: 
"<bəc̓ac>"
	"bəc̓ac" N Sg Nom
: \n

and

hfst-lookup src/analyser-gt-desc.hfstol 
> bəc̓ac
bəc̓ac	bəc̓ac+N+Sg+Nom	0.000000

Lut-speller compound letters not accepted but suggested

This issue appears to be related to Issue #1, where all words with multi-character symbols seem to be affected:

tiʔəʔ [sčətxʷəd] [gʷəl] [x̌ʷul̕] [ƛ̕uʔibibəš].

On macOS 13.5.1, TextEdit's Spelling and Grammar check does not accept ‹sčətxʷəd›, yet it suggests ‹sčətxʷəd› with exactly the same spelling.

[Screenshot: TextEdit spelling suggestion for sčətxʷəd, 2024-01-29]

Upper-case letters in foreign names ONLY

Lushootseed does not use upper-case letters at the beginning of sentences; upper-case words are foreign words. This means there should be no automated capitalization in the LibreOffice rendition of Lushootseed. IPA is not subject to case alternation.
