
unidecode's Introduction

Unidecode, lossy ASCII transliterations of Unicode text

It often happens that you have text data in Unicode, but you need to represent it in ASCII: for example, when integrating with legacy code that doesn't support Unicode, for ease of entry of non-Roman names on a US keyboard, or when constructing ASCII machine identifiers from human-readable Unicode strings that should still be somewhat intelligible. A popular example of this is making a URL slug from an article title.

Unidecode is not a replacement for fully supporting Unicode for strings in your program. There are a number of caveats that come with its use, especially when its output is directly visible to users. Please read the rest of this README before using Unidecode in your project.

In most of the examples listed above you could represent Unicode characters as ??? or \\15BA\\15A0\\1610, to mention two extreme cases. But that's nearly useless to someone who actually wants to read what the text says.

What Unidecode provides is a middle road: the function unidecode() takes Unicode data and tries to represent it in ASCII characters (i.e., the universally displayable characters between 0x00 and 0x7F), where the compromises taken when mapping between two character sets are chosen to be near what a human with a US keyboard would choose.

The quality of the resulting ASCII representation varies. For languages of Western origin it should be between perfect and good. On the other hand, transliteration (i.e., conveying, in Roman letters, the pronunciation expressed by the text in some other writing system) of languages like Chinese, Japanese or Korean is a very complex issue and this library does not even attempt to address it. It draws the line at context-free character-by-character mapping. A good rule of thumb is that the further the script you are transliterating is from the Latin alphabet, the worse the transliteration will be.

Generally, Unidecode produces better results than simply stripping accents from characters (which can be done in Python with built-in functions). It is based on hand-tuned character mappings that, for example, also contain ASCII approximations for symbols and non-Latin alphabets.
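To make the difference concrete, here is a minimal comparison sketch (standard library plus unidecode; the strip_accents() helper is illustrative, not part of any API): stripping accents via Unicode normalization loses everything that is not a base letter plus combining mark, while unidecode() falls back to its hand-tuned tables.

import unicodedata
from unidecode import unidecode

def strip_accents(text):
    # Decompose characters, drop combining marks, then discard
    # anything that is still non-ASCII.
    decomposed = unicodedata.normalize('NFKD', text)
    no_marks = ''.join(c for c in decomposed if not unicodedata.combining(c))
    return no_marks.encode('ascii', 'ignore').decode('ascii')

print(strip_accents('kožušček'))      # 'kozuscek'  -- accents handled
print(strip_accents('\u5317\u4EB0'))  # ''          -- CJK simply disappears
print(unidecode('\u5317\u4EB0'))      # 'Bei Jing ' -- hand-tuned mapping survives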

Note that some people might find certain transliterations offensive. The most common examples involve characters that are used in multiple languages: a user expects a character to be transliterated in their language, but Unidecode uses a transliteration from a different one. It's best not to use Unidecode for strings that are directly visible to users of your application. See also the Frequently Asked Questions section for more information on common problems.

This is a Python port of the Text::Unidecode Perl module by Sean M. Burke <[email protected]>.

Module content

This library contains a function that takes a string object, possibly containing non-ASCII characters, and returns a string that can be safely encoded to ASCII:

>>> from unidecode import unidecode
>>> unidecode('kožušček')
'kozuscek'
>>> unidecode('30 \U0001d5c4\U0001d5c6/\U0001d5c1')
'30 km/h'
>>> unidecode('\u5317\u4EB0')
'Bei Jing '

You can also specify an errors argument to unidecode() that determines what Unidecode does with characters that are not present in its transliteration tables:

  • 'ignore' (the default): ignore those characters, i.e. replace them with an empty string.
  • 'strict': raise a UnidecodeError. The exception object carries an index attribute that can be used to find the offending character.
  • 'replace': replace them with '?' (or another string, given in the replace_str argument).
  • 'preserve': keep the original, non-ASCII character in the string. Note that in this case the string returned by unidecode() will not be ASCII-encodable!

For example:

>>> unidecode('\ue000') # unidecode does not have replacements for Private Use Area characters
''
>>> unidecode('\ue000', errors='strict')
Traceback (most recent call last):
...
unidecode.UnidecodeError: no replacement found for character '\ue000' in position 0
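The remaining two modes, as an illustrative continuation of the same doctest (the custom replacement string '[?]' here is just an example value for replace_str):

>>> unidecode('\ue000', errors='replace', replace_str='[?]')
'[?]'
>>> unidecode('\ue000', errors='preserve')
'\ue000'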

A utility is also included that allows you to transliterate text from the command line in several ways. Reading from standard input:

$ echo hello | unidecode
hello

from a command line argument:

$ unidecode -c hello
hello

or from a file:

$ unidecode hello.txt
hello

The default encoding used by the utility depends on your system locale. You can specify another encoding with the -e argument. See unidecode --help for a full list of available options.
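For example, transliterating non-ASCII input in a UTF-8 locale (an illustrative session):

$ echo 'kožušček' | unidecode
kozuscek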

Requirements

Nothing except Python itself. Unidecode supports Python 3.7 or later.

You need a Python build with "wide" Unicode characters (also called a "UCS-4 build") in order for Unidecode to work correctly with characters outside of the Basic Multilingual Plane (BMP). Common characters outside the BMP are bold, italic, script, etc. variants of the Latin alphabet intended for mathematical notation. Surrogate pair encoding of "narrow" builds is not supported in Unidecode.

If your Python build supports "wide" Unicode the following expression will return True:

>>> import sys
>>> sys.maxunicode > 0xffff
True

See PEP 261 for details regarding support for "wide" Unicode characters in Python.

Installation

To install the latest version of Unidecode from the Python package index, use this command:

$ pip install unidecode

To install Unidecode from the source distribution and run unit tests, use:

$ python setup.py install
$ python setup.py test

Frequently asked questions

German umlauts are transliterated incorrectly
Latin letters "a", "o" and "u" with diaeresis are transliterated by Unidecode as "a", "o", "u", not according to German rules "ae", "oe", "ue". This is intentional and will not be changed. Rationale is that these letters are used in languages other than German (for example, Finnish and Turkish). German text transliterated without the extra "e" is much more readable than other languages transliterated using German rules. A workaround is to do your own replacements of these characters before passing the string to unidecode().
Japanese Kanji is transliterated as Chinese
As with the accented Latin letters discussed in the answer above, the Unicode standard encodes letters, not letters in a certain language or their meaning. With Japanese and Chinese this is even more evident, because the same letter can have very different transliterations depending on the language it appears in. Since Unidecode does not do language-specific transliteration (see the next question), it must decide on one. For certain characters that are used in both Japanese and Chinese, the decision was to use Chinese transliterations. If you intend to transliterate Japanese, Chinese or Korean text, please consider using other libraries which do language-specific transliteration, such as Unihandecode.
Unidecode should support localization (e.g. a language or country parameter, inspecting system locale, etc.)
Language-specific transliteration is a complicated problem and beyond the scope of this library. Changes related to this will not be accepted. Please consider using other libraries which do provide this capability, such as Unihandecode.
Unidecode should automatically detect the language of the text being transliterated
Language detection is a completely separate problem and beyond the scope of this library.
Unidecode should use a permissive license such as MIT or the BSD license.
The maintainer of Unidecode believes that providing access to source code on redistribution is a fair and reasonable request when basing products on voluntary work of many contributors. If the license is not suitable for you, please consider using other libraries, such as text-unidecode.
Unidecode produces completely wrong results (e.g. "u" with diaeresis transliterating as "A 1/4 ")
The strings you are passing to Unidecode have been wrongly decoded somewhere in your program. For example, you might be decoding UTF-8 encoded strings as Latin-1. With a misconfigured terminal, locale and/or text editor this might not be immediately apparent. Inspect your strings with repr() and consult the Unicode HOWTO. The sketch below reproduces the effect.
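A small reproduction of the wrong decode described above (illustrative):

from unidecode import unidecode

good = 'ü'
mangled = good.encode('utf-8').decode('latin1')  # simulate the bad decode
print(repr(mangled))       # 'Ã¼'
print(unidecode(mangled))  # 'A 1/4 ' -- garbage in, garbage out
print(unidecode(good))     # 'u'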
Why does Unidecode not replace \u and \U backslash escapes in my strings?
Unidecode knows nothing about escape sequences. Interpreting these sequences and replacing them with actual Unicode characters in string literals is the task of the Python interpreter. If you are asking this question, you are very likely misunderstanding the purpose of this library. Consult the Unicode HOWTO and possibly the unicode_escape encoding in the standard library.
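If your data really does contain literal backslash escapes (six characters, not one), it is the unicode_escape codec, not Unidecode, that interprets them. A sketch:

from unidecode import unidecode

s = r'\u00c1'  # a literal 6-character escape sequence, not the letter Á
decoded = s.encode('ascii').decode('unicode_escape')
print(decoded)             # 'Á'
print(unidecode(decoded))  # 'A'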
I've upgraded Unidecode and now some URLs on my website return 404 Not Found.
This is an issue with the software running your website, not with Unidecode. New versions of the Unidecode library are occasionally released that contain improvements to the transliteration tables, which means you cannot rely on unidecode() output staying the same across versions. If you use unidecode() to generate URLs for your website, either generate the URL slug once and store it in the database, or pin your Unidecode dependency to one specific version.
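A minimal sketch illustrating the first option (the slugify() helper is hypothetical): compute the slug once when the content is created and store it with the record, rather than re-deriving it from the title on every request.

import re
from unidecode import unidecode

def slugify(title):
    # Transliterate, lowercase, then squash runs of non-alphanumerics.
    ascii_title = unidecode(title).lower()
    return re.sub(r'[^a-z0-9]+', '-', ascii_title).strip('-')

print(slugify('Kožušček & co.'))  # 'kozuscek-co' -- store this value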

Some of the issues in this section are discussed in more detail in this blog post.

Performance notes

By default, unidecode() optimizes for the use case where most of the strings passed to it are already ASCII-only and no transliteration is necessary (this default might change in future versions).

For performance critical applications, two additional functions are exposed:

unidecode_expect_ascii() is optimized for ASCII-only inputs (approximately 5 times faster than unidecode_expect_nonascii() on 10-character strings, more on longer ones), but slightly slower for non-ASCII inputs.

unidecode_expect_nonascii() takes approximately the same amount of time on ASCII and non-ASCII inputs, but is slightly faster for non-ASCII inputs than unidecode_expect_ascii().

Apart from differences in run time, both functions produce identical results. For most users of Unidecode, the difference in performance should be negligible.
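An illustrative micro-benchmark sketch (absolute numbers vary by machine and Unidecode version):

import timeit
from unidecode import unidecode_expect_ascii, unidecode_expect_nonascii

samples = ['plain ascii text', 'kožušček \u5317\u4EB0']
for fn in (unidecode_expect_ascii, unidecode_expect_nonascii):
    for s in samples:
        # Time 100k calls of each variant on each kind of input.
        t = timeit.timeit(lambda: fn(s), number=100_000)
        print(f'{fn.__name__}({s!r}): {t:.3f}s')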

Source

You can get the latest development version of Unidecode with:

$ git clone https://www.tablix.org/~avian/git/unidecode.git

There is also an official mirror of this repository on GitHub at https://github.com/avian2/unidecode.

Contact

Please make sure to read the Frequently asked questions section above before contacting the maintainer.

Bug reports, patches and suggestions for Unidecode can be sent to [email protected].

Alternatively, you can also open a ticket or pull request at https://github.com/avian2/unidecode

Copyright

Original character transliteration tables:

Copyright 2001, Sean M. Burke <[email protected]>, all rights reserved.

Python code and later additions:

Copyright 2024, Tomaž Šolc <[email protected]>

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. The programs and documentation in this dist are distributed in the hope that they will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose.


unidecode's Issues

Soft hyphen

The character u"\xad" (http://www.fileformat.info/info/unicode/char/00ad/index.htm) looks exactly the same as a normal hyphen (http://www.fileformat.info/info/unicode/char/2010/index.htm).

If I try to normalize it to ASCII, however, it ends up as '':

>>> unidecode.unidecode(u"\xad")
''

Other nasty characters, like the HORIZONTAL ELLIPSIS (http://www.fileformat.info/info/unicode/char/2026/index.htm), are normalized correctly:

>>> unidecode.unidecode(u"\u2026")
'...'

Can the SOFT HYPHEN also be normalized into a "-"?

Normalizing fancy characters

Thanks for making the library, it's really helpful in my case for cleaning social media texts.

Here are some cases where the transliteration/conversion was not correct (Version 1.3.2):

>>> from unidecode import unidecode
>>> unidecode("ᕼᗩᑭᑭIᗴᗴ")
'hpokikiIgaga'
>>> unidecode("🇦🇷🇮")
''
>>> unidecode("ωεłł")
'oell'
>>> unidecode("RᗅIPႮ")
'RghoIPP'
>>> unidecode("ғʀᴇᴇ")
"g'REE"

I will update this issue with more examples as I come across. Thanks!


Edit:

It looks like most of the issues arise because these are characters from other scripts, not merely fancy variants of Latin letters. So, is there some way to do appearance-based conversion rather than approximate-phonetic conversion?

1.3.0 wheel incorrectly claiming Python 2 support.

Version 1.3.0 dropped support for Python <3.5, including Python 2, but setup.cfg still contains the "universal=1" line, so wheels are still being created with names like "Unidecode-1.3.0-py2.py3-none-any.whl", which claim Python 2 support when this is not true.

universal = 1

That line should probably be removed, a new 1.3.1 built, and a PyPI "yank" request made for the current 1.3.0, since as long as it is there pip will incorrectly consider it the most recent Python 2 compatible version, instead of the proper 1.2.0.

ə -> e, instead of @

>>> import unidecode
>>>
>>> unidecode.unidecode("Balabəyov")
'Balab@yov'
>>>

Expected: "Balabeyov", instead of "Balab@yov"

Syntax error with Unidecode 1.3.5 on Python 3.5

Hi, I updated my server today with the latest version and now I get the following error:

   File "/usr/local/lib/python3.5/dist-packages/unidecode/__init__.py", line 151
     __import__(f'unidecode.{file.stem}', globals(), locals(), ['data']).data,
                                       ^
 SyntaxError: invalid syntax

I am running Python 3.5 on Debian Stretch.
Forcing the previous version, 1.3.4, solves the issue.
I am not sure whether this means that Python 3.5 is no longer supported in the newest version, or whether there is an error in the latest Unidecode release?

Tironian "et" being changed to numeral, 7

Hi.

I've noticed that the Unicode character "⁊", a Tironian note standing for Latin "et", is being changed to the numeral "7". While they look alike, this doesn't seem like the best way to decode it, as it is never used to represent a numeral.

In Old Irish and Latin texts it was used as shorthand for the full word "et", so perhaps it should be replaced with "et". Alternatively, it is still used as an ampersand in modern Irish, so maybe "&" would be a better replacement option?

Le meas,
Adrian.

Consider dual licensing under Perl Artistic

Hi,

I am one of the maintainers of Apache Airflow. Indirectly we make use of unidecode through a dependency chain (python-nvd3 -> python-slugify -> unidecode). The Apache Software Foundation considers the GPLv2 incompatible with the APL2.

We didn't find out about the issue until now because nvd3 and python-slugify maintain compatible licenses. Python-slugify is MIT licensed, which might be incompatible with GPLv2 (I am not a lawyer).

We could ask python-slugify to use an alternative library, or implement our own. Both are obviously a hassle, as we would need to convince package maintainers to switch, and there is no technical reason for us to do so.

As the original Perl package was also available under the Perl Artistic License, would you consider dual licensing your derivative? That would allow us to continue to use these very useful libraries and just make us very happy :-).

Thanks for your consideration in advance!
Bolke

Some Minor Bugs of Transliteration

  1. 呆: Ai -> should be Dai (its pronunciation in Chinese).

  2. unidecode transforms all characters to their Chinese pronunciations even when the text is actually written in Japanese Kanji.

  3. When transforming Chinese characters to their pronunciations, trailing whitespace is always added to the end of the text: 阿傍 -> "A Bang " (there is a trailing whitespace at the end).

  4. When transforming text composed of Chinese characters and Japanese katakana/hiragana, whitespace is left out: 阿呆の足下使い -> "A Ai noZu Xia Shi i", expected "A Ai no Zu Xia Shi i". (Also, these are Japanese characters, not Chinese characters.)

issues on Transliteration of Japanese Kanji and Chinese Kanji

Thank you for your work.

>>> unidecode("人生は難しいですね。") 'Ren Sheng haNan shiidesune. '

This is a Japanese sentence with Kanji.
Although the kana (like は and しいですね。) are correctly transliterated, the Kanji are not.

In Chinese, 人生 = Ren Sheng and 難 = Nan.
But here it should be 人生 = jinsei and 難 = muzuka, since this is a Japanese sentence. Can this problem be solved?

using unidecode with text file

I'm using unidecode to remove accents from French words. It works perfectly if the word is declared as a string, like accented_string = "Málaga", but doesn't work as expected if I read the word from a text file!

this code working well

import unidecode

accented_string = 'Málaga'
unaccented_string = unidecode.unidecode(accented_string)
print(unaccented_string)

the output is "Malaga"
now i want to do the same, but by reading text file

import unidecode
import fileinput

fr_txt = "fr.txt"
for name in fileinput.input([fr_txt]):
    clean = name.replace("\n", "")
    unaccented_string = unidecode.unidecode(clean)
    print(unaccented_string)

the output is "M',laga"!!! So what's wrong? I also tried this code:

import io
from unidecode import unidecode

fr_txt = "txt.txt"
f = io.open(fr_txt, mode="r", encoding="utf-8")
f.read()  # note: this read() consumes the whole file before the loop below
for name in f:
    clean = name.replace("\n", "")
    line = unidecode(clean)
    print(line)

the output is "M',laga" again!!! Any idea?


Unidecode relative import

Unidecode cannot currently be moved/packaged inside another project, as it uses an absolute __import__ in __init__.py.

The easy fix is to set level=x, where x is the recursion level from the top package of the parent project; in my case it had to be level=2.

Could you try to make unidecode more "packaging-friendly"?

Thank you for this great library!

License issue

Hi,

We are planning to use the library, which is licensed under the GNU General Public License v2. We are not making any modifications to the library's source code. Does our application have to be GPL as well, or can it be made proprietary? Can you please give more insight into this?

Regards,
Manju

Faulty transliteration of half-width Katakana with dakuten and handakuten

Example code to test Japanese letters in Hiragana, full-width Katakana and half-width Katakana, using dakuten and handakuten:

import unidecode as ud

hiragana    = "はひほへほ ばびぶぼべ ぱぴぷぺぽ"
katakana_fw = "ハヒフヘホ バビブベボ パピプペポ"
katakana_hw = "ﾊﾋﾌﾍﾎ ﾊﾞﾋﾞﾌﾞﾍﾞﾎﾞ ﾊﾟﾋﾟﾌﾟﾍﾟﾎﾟ"

print(ud.unidecode(hiragana))
print(ud.unidecode(katakana_fw))
print(ud.unidecode(katakana_hw))

The results for Hiragana and full-width Katakana are correct; however, the result for half-width Katakana is not:

hahihoheho babibubobe papipupepo
hahihuheho babibubebo papipupepo
hahihuheho ha:hi:hu:he:ho: ha;hi;hu;he;ho;

Instead of the correct transliteration "ba" it returns "ha:", and instead of "pa" it returns "ha;".

Add a Homepage to the pypi listing

👋 Could you add this repo as the "homepage" for the PyPI listing for unidecode? https://pypi.org/project/Unidecode/

I spotted an issue in a Dependabot pull request upgrading unidecode where the dependency URL goes to the wrong repo: https://github.com/kmike/text-unidecode

Example pull request: metabrainz/listenbrainz-server#1219

For reference, Dependabot tries to find the "homepage" for a dependency from the PyPI JSON response, first looking at any homepage entries and, if none exist, scanning the description for source repo links. In this case the description for Unidecode on PyPI includes a link to text-unidecode (under the FAQ entry "Unidecode should use a permissive license such as MIT or the BSD license.") before the correct GitHub link.

Suggested changes, fixes and updates to Hebrew transliteration

I would like to ask for @alonbl's feedback/greenlight before preparing my PR. I am interested in addressing several issues I see in the current Hebrew transliteration:

  1. 05ef (triple yod) - can now be transliterated as YYY.
  2. It seems inconsistent to have rafe as "-" and dagesh as "'". If we are going by the graphics then dagesh should be "." (dot), but I think a more useful choice would be to ignore both of them (as is currently done for the shin dots).
  3. Better alignment with Hebrew Language Academy rules (https://hebrew-academy.org.il/wp-content/uploads/taatik-ivrit-latinit-1-1.pdf):
    a. 05d7 ח is never transliterated as KH - a more standard-compliant version would be H (to differ from h) or h.
    b. It is inconsistent to transliterate א as A and ע as a backtick. ע could be A, 'A or A'. Mind you, all these choices, including the one for א, are non-standard. Also, the backtick for ע is from the "exact standard", while we are otherwise following the "simple standard" here, which uses '. I am really not sure what the right thing to do is. We could also follow other languages and use the letter names in these cases: ALEPH and AYIN.
    c. Using @ for schwa is consistent with the IPA symbol, but it is not useful and not part of the Hebrew standard, which ignores schwa in transliteration (or in some cases uses e).
    d. ק should be k as in the simple standard (q is used in the exact standard).
  4. I am not sure what 05f5, 05f6 and 05f7 are, as they are not part of Unicode AFAICT.
  5. Fixes in Hebrew presentation forms (https://www.unicode.org/charts/PDF/UFB00.pdf):
    a. fb4f should be EL not l.
    b. fb4e should be f not p.
    c. fb4d should be KH not k.
    d. fb4c should be v not b.
    e. fb4b should be o not vo; similarly, fb1d should be i not yi.
    f. Fix e.g. sh, ts to be SH, TS as done for the regular letters.
    g. fb47 should be k not ts (this is a mistake).
    h. fb41 should be s not n (this is a mistake).
    i. fb3e should be m not l (this is a mistake).
    j. fb30, currently missing, should be i.
    k. fb27 should be r not m (this is a mistake).
    l. Add fb21 and fb20, similar to the choices decided on for the regular א and ע.
  6. Graphically, sof pasuk looks like ":", but for NLP tasks it would be more useful to use "." or even ". ", as this is the meaning of the punctuation.

PyPI Wheels include `__init__.pyi`

The PyPI wheels contain __init__.pyi (checked in 1.3.6). Is this intended? This commit removed __init__.pyi two years ago: f877539. It looks like the file is for type annotations, but type annotations have since been added to __init__.py. Thank you!

Unicode spaces should be converted to ASCII 32

Current behavior:

unidecode("a"+chr(2000)+"b") == "a[?]b"

Desired behavior:

unidecode("a"+chr(2000)+"b") == "a b"

Rationale

Unicode spaces naturally correspond to the usual ASCII space.

Test failure with PyPy 7.1.1

The tests in 1.1.0 release fail with PyPy:

======================================================================                                                                             
FAIL: test_encoding_error (tests.test_utility.TestUnidecodeUtility)                                                                                
----------------------------------------------------------------------                                                                             
Traceback (most recent call last):                                                                                                                 
  File "/tmp/portage/dev-python/unidecode-1.1.0/work/Unidecode-1.1.0/tests/test_utility.py", line 43, in test_encoding_error
    self.assertEqual(err, expected)                                      
AssertionError: u'Unable to decode input: invalid utf-8, start: 0, end: 1\n' != 'Unable to decode input: invalid start byte, start: 0, end: 1\n'
                                                                                                                                                   
----------------------------------------------------------------------

y transformed to u?

Hi there !
I hope this is really a bug and not just me misunderstanding what's going on... Anyway.

Context

  • unidecode==0.4.21
  • python 3.5.2
  • Linux Mint Sonya 18.2 (Not that I think it matters here)

Reproduction of the bug

from unidecode import unidecode
print(unidecode("y"))  # prints y
print(unidecode("ў"))  # prints u

Expected

print(unidecode("ў")) should print y as well

I'll try to see if I can understand the reason why it works this way in the codebase but at least the issue is open...

UTF-16 problems

First of all, thanks for this tool.

I'm trying to decode UTF-16 escaped strings to no avail.

For instance, \u00c1 is converted to A instead of Á:

>>> import unidecode
>>> unidecode.unidecode('\u00c1')
'A'
>>> unidecode.unidecode(u'\u00c1')
'A'

(I've used https://www.branah.com/unicode-converter to verify)

Is there an option to specify the UTF-16 encoding?

Thanks.

Unexpected behavior in Python 3

I'm trying to understand what's happening here. I have the following text in a file:

"\u003e"

But decoding this has no effect:

dom = open('dom.txt', encoding='utf-8').read()
unidecode(dom)

"\u003e"

But "\u003e" -> ">"

Why is this not decoded as expected?

wrong codepoint for Private Use Area

Hi,

in __init__.py:78: if codepoint > 0xeffff:

Is that the correct codepoint? I can see a Private Use Area at 0xe000-0xefff.
Is the above a typo, or am I missing something re: UTF-16?

Language parameter?

Major scripts like Latin, Cyrillic and Arabic are used to write many languages. Right now, the library is implicitly unidecoding all Cyrillic as if it were specifically Russian.

For example:
unidecode('Халид Бешлић, Цеца, Жељко Јоксимовић') # sr

actual:
'Khalid Beshlitsh, Tsetsa, Zheljko Joksimovitsh'

expected:
'Halid Beslic, Ceca, Zeljko Joksimovic'

That's because there should be an implicit initial conversion to Latin ('Halid Bešlić, Ceca, Željko Joksimović'). Passing that to unidecode works as expected.

Similar is true for Latin. For example:
unidecode('Kadıköy') # tr
unidecode('Schönheimer') # de

actual:
'Kadikoy'
'Schonheimer'

expected:
'Kadikoy'
'Schoenheimer'

The most straightforward fix is an optional parameter lang.

cannot import name 'unidecode'

pip install unidecode

in a new .py file:

from unidecode import unidecode
unidecode('ko\u017eu\u0161\u010dek')

ImportError: cannot import name 'unidecode' from partially initialized module 'unidecode' (most likely due to a circular import)

Weak copyleft licence

Is there any chance this could be released with a weak copyleft licence (e.g. BSD or MIT)?

Support transforming ℉ into something

I'd like to see unidecode turn ℉ into something else, although I'm not sure what the best option for replacing the degree symbol is. degF maybe? Just F?

Can't install with pipenv

Getting the following error when trying to install with pipenv:

Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
  You can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.Hint: try $ pipenv lock --pre if it is a pre-release dependency
Could not find a version that matches Unidecode<0.05,==0.4.21,>=0.04,>=1.0.23
Tried: 0.4.1, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.4.10, 0.4.11, 0.4.12, 0.4.13, 0.4.14, 0.4.16, 0.4.17, 0.4.18, 0.4.19, 0.4.20, 0.4.20, 0.4.21, 0.4.21, 1.0.22, 1.0.22, 1.0.23, 1.0.23
There are incompatible versions in the resolved dependencies.

Map ・ and ー

\u30fb and \u30fc are currently both mapped to an empty character. Is there a reason for this?

If not, \u30fc (ー) should probably be mapped to -, and \u30fb (・) to maybe ., although I'm slightly less sure about the latter.

Some Chinese names have unnecessary spaces at the end when transliterating

When trying to transliterate "马云", I receive "Ma Yun " (notice the space at the end) instead of "Ma Yun".

Here's the code you can use to replicate this issue:

import unittest
import unidecode

class TestStrings(unittest.TestCase):
    def test_replace_non_ascii_letters_with_chinese_name(self):
        self.assertEquals(unidecode.unidecode("马云"), "Ma Yun")

The test fails with the following error:

AssertionError: 'Ma Yun ' != 'Ma Yun'
- Ma Yun 
?       -
+ Ma Yun

Run on Python 3.8.5

EDIT:

Google Translate seems to do this with no trailing space, but perhaps Google Translate has the faulty transliteration. Chinese speakers are welcome to correct me.

Digraphs and trigraphs

Truly arbitrary digraphs and trigraphs are less common in non-Latin scripts, but still happen.

unidecode("Բովանդակություն")

actual:
'Bovandakowt`yown'
expected:
'Bovandakut`yun'

The current behaviour is not necessarily wrong, but a bit undesirable because it breaks the roundtrip expectation ('Zulu' == unidecode(transliterate('Zulu', lang='hy'))).

There may be other cases mentioned in https://en.wikipedia.org/wiki/Digraph_(orthography)#Examples that are not covered. Some of them work as intended, though, because for our purposes a logical digraph like дж is not a truly arbitrary digraph: it is still two characters in Latin (and even three in ASCII), so mapping each character without context still works.

Version 1.3.1 (and 1.3.0) published without python_requires metadata given to PyPI

Version 1.3.1 is Python 3 only, but when it was published the python_requires information did not get sent to PyPI, so Python 2 versions of pip are trying to download it and failing. I don't know how unidecode gets published to PyPI, but tools like twine will send the python_requires information automatically.
Could version 1.3.1 please be yanked and a version 1.3.2 published with the python_requires information available?

Handling of conversions near punctuation

I recently upgraded unidecode, and saw some failing test.

The test in question:

"Pickup 65” TV from Platform 9¾, Kingʹs Cross Station."

The result:

- Pickup 65" TV from Platform 93/4, King's Cross Station.
+ Pickup 65" TV from Platform 9 3/4 , King's Cross Station.

I think that separating the 9 from the 3/4 is a good idea, so as to distinguish it from the possibility of 93 / 4 (which the original is not); however, there is also a space placed between the 3/4 and the comma, which does not read well.

Not a major issue but probably something that will bug people.

Difference in NFC vs NFD decoding

Unidecode gives different values depending on whether your string is normalized to NFC (composed form) or NFD (decomposed form, where a single letter may be represented by the base letter followed by a separate combining accent mark).

unidecode.unidecode(unicodedata.normalize('NFC', 'Ѐ')) = 'Ie'
unidecode.unidecode(unicodedata.normalize('NFD', 'Ѐ')) = 'E'

Is this expected behavior?
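For reference, a self-contained reproduction sketch (Ѐ is U+0400; its NFD form is U+0415 CYRILLIC CAPITAL LETTER IE followed by U+0300 COMBINING GRAVE ACCENT, and the combining mark contributes nothing to the output):

import unicodedata
import unidecode

ch = '\u0400'  # Ѐ
print(unidecode.unidecode(unicodedata.normalize('NFC', ch)))  # 'Ie'
print(unidecode.unidecode(unicodedata.normalize('NFD', ch)))  # 'E'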

license of data files

Hi Tomaž,

can you clarify the license of the data/mapping files?
The original Text::Unidecode was released under the Clarified Perl Artistic License.
I am currently working on a transliteration library under the MIT license and would love to make use of your excellent updates/refinements to the mapping files.

Would you be willing to release your updated mappings under a dual Clarified Perl Artistic / GPL license? I have also contacted the original Perl author, whether he's willing to release his code under Perl Artistic License 2.0...

Sorry for all the back and forth.

Regards,
Johannes

Option to avoid transliterating punctuation marks as regular letters

Always transliterating punctuation marks as regular letters can be an issue for some applications. While the paragraph sign ¶ is transliterated to P, I would like to have an option to treat it as unknown.

(I started this issue following 81f938d, regarding the exotic inverted nun ׆ that was changed to be transliterated into n like the regular nun נ; the inverted one, however, is an editorial/punctuation mark.)

Support for Latin Extended-D ?

Hi there,
I find myself in a particular situation where I deal with manuscript transcriptions. Of course, some characters are not valid in Unicode, but some seem to be, from my understanding:

[The example characters were embedded as inline images and did not survive here; the image names reference codepoints such as x0a7, x0e4-x0f1 and x0f7.]

Should I open a pull request for x0a7 and x0f7?

Feature Request: Add ability to set custom replacements

This is a feature request to add a library function that can be called to set custom character replacement mappings. The function could be called in a manner similar to the following:

unidecode.setReplacement('™', "(TM)")

after which subsequent calls to unidecode would replace '™' with '(TM)' rather than the default of "(tm)".

For my particular use case, this would be useful for standardizing names across data sources. For instance, if one source has the name "Acme™" and the other has the name "Acme", I could replace all instances of '™' with "", which would let me easily compare across sources.

However, I can see a large number of other use cases for this as well. For example, the FAQ mentions that localization is not supported and that certain German characters translate to English text rather than German phonetics. A call like this would allow programmers to build their own "localization" layers, adding whatever mappings they feel like.
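In the meantime, a user-side sketch can approximate the request (setReplacement() above and the wrapper below are hypothetical, not part of the Unidecode API):

from unidecode import unidecode

OVERRIDES = {'\u2122': '(TM)'}  # '™' -> '(TM)' instead of the default '(tm)'

def unidecode_with_overrides(text):
    # Apply custom replacements first, then fall back to unidecode().
    for ch, repl in OVERRIDES.items():
        text = text.replace(ch, repl)
    return unidecode(text)

print(unidecode_with_overrides('Acme\u2122'))  # 'Acme(TM)'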

Move to semantic versioning

Traditionally Python Unidecode followed the versioning scheme used by the original Perl module. This doesn't make sense any more because:

  • Perl Text::Unidecode itself switched to a different versioning scheme long ago - http://search.cpan.org/~sburke/Text-Unidecode-1.30/
  • Python Unidecode has now sufficiently diverged from Perl version so that matched version numbers are no longer useful,
  • setuptools (>= 8) began mangling version numbers, yielding confusion because Unidecode 0.04.x now sometimes appears as 0.4.x - pypa/setuptools#302

Next release should be numbered something like 1.0.22.

Input() Method

Hello Sir,

As described in your documentation, I used the code below and it works: it successfully decodes Unicode escape characters.

>>> from unidecode import unidecode
>>> value = "\u0048\u0065\u006c\u006c\u006f"
>>> print(unidecode(value))
Hello

Problem

When I use input() to read the value, unidecode prints the same value I entered:

>>> from unidecode import unidecode
>>> value = input("Input: ")
Input: \u0048\u0065\u006c\u006c\u006f
>>> print(unidecode(value))
\u0048\u0065\u006c\u006c\u006f

Could not resolve host: www.tablix.org

$ git clone https://www.tablix.org/~avian/git/unidecode.git
Cloning into 'unidecode'...
fatal: unable to access 'https://www.tablix.org/~avian/git/unidecode.git/': Could not resolve host: www.tablix.org

Feature Request: Ignore Ranges of Codepoints

Hello @avian2. Would it be possible to add a feature that takes as input a set of one or more Unicode codepoints and/or ranges to leave unchanged? Unidecode would then effectively ignore those ranges of characters, leaving them in place while transliterating everything else.

Thanks,
Diogo
