In this early pre-release version, the library provides:
- datasets to train business-oriented French text models
- a character normalization pipeline tailored for French text
Install¶
pip install frenchtext
Dependencies¶
Licence¶
Apache License 2.0: https://www.apache.org/licenses/LICENSE-2.0
How to use¶
The detailed documentation for each module is available through the menu on the left side of this page.
You will find below an overview of the library.
French datasets¶
Data sources¶
The text content of the main French websites in the domains of finance and business (plus Wikipedia) was extracted in September 2019 using nlptextdoc.
This extraction was done as "politely" as possible:
- extract only freely and publicly available content
- respect the robots.txt directives of each website (pages forbidden for indexing, maximum extraction rate)
- detect when websites use tools to prevent indexing (like Datadome) and abort the crawl
IMPORTANT: The original authors of the websites own the copyright on all text blocks in this dataset.
To be able to link each text block to its original author, we track the origin URL of each text block throughout the whole process.
YOU CAN'T REUSE THE TEXT BLOCKS FOR ANY PURPOSE EXCEPT TRAINING A NATURAL LANGUAGE PROCESSING MODEL.
See the new European copyright rules: European Parliament approves new copyright rules for the internet
"The directive aims to make it easier for copyrighted material to be used freely through text and data mining, thereby removing a significant competitive disadvantage that European researchers currently face."
=> 131 websites and 2 564 755 HTML pages
Data preparation¶
The text blocks were then:
- deduplicated to keep only distinct text blocks for each website (discarding part of the original document structure),
- tagged (but not filtered) by language (using https://fasttext.cc/docs/en/language-identification.html, see the sketch below),
- grouped into categories according to the main theme of the original website,
- split into Pandas dataframes of size < 2 GB.
=> 10 categories: 'Assurance', 'Banque', 'Bourse', 'Comparateur', 'Crédit', 'Forum', 'Institution', 'Presse', 'SiteInfo', 'Wikipedia'
In each dataframe, the text blocks were additionally SHUFFLED IN A RANDOM ORDER to make it very difficult to reconstruct the original articles (a safety measure to help protect the authors' copyrights).
The results of this second step can be downloaded to the config.datasets directory, as dataframes serialized in the Feather format, in files named according to the 'DatasetFile' column of the datasets table.
=> 19 dataset files: 'assurance', 'banque', 'bourse', 'comparateur', 'crédit', 'forum', 'institution', 'presse-1', 'presse-2', 'presse-3', 'presse-4', 'presse-5', 'presse-6', 'siteinfo', 'wikipedia-1', 'wikipedia-2', 'wikipedia-3', 'wikipedia-4', 'wikipedia-5'
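As an illustration of the language tagging step, here is a minimal sketch of the general technique with fastText (this is not the library's actual preparation code), assuming the pretrained lid.176.bin model has been downloaded locally:
import fasttext

# Assumption : the pretrained language identification model 'lid.176.bin'
# has been downloaded from https://fasttext.cc/docs/en/language-identification.html
lidmodel = fasttext.load_model("lid.176.bin")

def tag_language(textblock):
    # predict() expects a single line of text and returns ([labels], [probabilities])
    labels, probs = lidmodel.predict(textblock.replace("\n", " "))
    # labels look like '__label__fr' : keep only the language code
    return labels[0].replace("__label__", ""), float(probs[0])

tag_language("L'assurance habitation couvre les dommages causés au logement.")
# -> ('fr', 0.99...) (approximate output)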
Dataset size¶
The number of words in each text block was computed using the default French tokenizer from spaCy v2.1 (see the sketch after the list below).
This business-oriented dataset contains 2 billion French words.
Here is a summary of the number of words contributed by each category in millions:
- Assurance : 12
- Banque : 20
- Bourse : 26
- Comparateur : 20
- Crédit : 1
- Forum : 152
- Institution : 4
- Presse : 963
- SiteInfo : 78
- Wikipedia : 727
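As an illustration of how such counts can be computed (a minimal sketch of the same idea, not the exact code used to produce the statistics above), the default French tokenizer from spaCy can be applied to each text block:
import spacy

# Blank French pipeline : only the default French tokenizer is used
nlp = spacy.blank("fr")

def count_words(textblock):
    # Count tokens, ignoring whitespace and punctuation-only tokens
    return sum(1 for token in nlp(textblock) if not token.is_space and not token.is_punct)

count_words("L'assurance auto est obligatoire en France.")
# -> 7 (approximate : the exact count depends on the spaCy version and its tokenization rules)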
Dataset files¶
from frenchtext.core import *
from frenchtext.datasets import *
List available dataset files:
datasetfiles = list_dataset_files()
datasetfiles
Source websites and number of words in each dataset file:
datasetsdf = list_datasets()
datasetsdf[["DatasetFile","Url","Pages","Words"]].iloc[80:100]
Download dataset files¶
download_dataset_file("assurance")
download_all_datasets()
You can change the local directory where the dataset files are downloaded:
config.datasets
config["datasets_path"] = "/tmp/datasets"
config.datasets.mkdir(parents=True, exist_ok=True)
config.datasets
Read dataset files¶
datasetdf = read_dataset_file("assurance")
datasetdf
Access text blocks in dataset files¶
Filter and iterate over the rows of a dataset file:
rowsiterator = get_rows_from_datasetdf(datasetdf, minwords=None, maxwords=5, lang="?")
show_first_rows(rowsiterator,10)
Filter and iterate over the text blocks of a full dataset (across multiple files):
textiterator = get_textblocks_from_dataset("Assurance", minwords=None, maxwords=10, lang="fr")
show_first_textblocks(textiterator,skip=2000,count=10)
Access a specific row:
get_text_from_rowindex(datasetdf,100)
Find text blocks with a specific char or substring:
find_textblocks_with_chars(datasetdf,"rétroviseur",count=20,ctxsize=15)
find_textblocks_with_chars(datasetdf,64257,count=10,wrap=True)
Track the source URL for each text block¶
Optionally download and read the URLs file to track the origin of each text block:
urlsdf = read_urls_file()
urlsdf.head()
get_text_from_rowindex(datasetdf,100)
get_url_from_rowindex(datasetdf, 100)
Character normalization pipeline¶
Motivation¶
French datasets often contain several thousand distinct Unicode characters.
Character stats in the Wikipedia dataset:
- 35.6 billion chars
- 13 502 distinct Unicode chars
Character stats in the Business dataset:
- 27.5 billion chars
- 3 763 distinct Unicode chars
We need to reduce the number of distinct characters fed to our natural language processing applications, for three reasons:
- chars that the user considers visually equivalent will often produce different application behavior: this is a huge problem for the user experience
- with so many chars, the designer of the NLP application will not be able to reason about all possible combinations: this could harm the explainability of the system
- this huge number of distinct characters adds a significant amount of complexity that the NLP models will have to deal with
Character stats in the Wikipedia dataset:
- only 1316 chars are more frequent than 1 in 100 million
- 99.9987% of Wikipedia chars would be preserved if we kept only the frequent chars
Character stats in the Business dataset:
- only 531 chars are more frequent than 1 in 100 million
- 99.9996% of Business chars would be preserved if we kept only the frequent chars
We can be smarter than that and replace rare chars with equivalent (or mostly equivalent) but more frequent chars, to preserve as much information as possible.
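For example, the rare ligature 'ﬁ' (Unicode code point 64257) is visually equivalent to the much more frequent pair of letters 'fi'. As a sketch of this idea using only the Python standard library (the library's own pipeline goes much further), Unicode compatibility normalization already performs some of these replacements:
import unicodedata

rareword = chr(64257) + "nance"         # 'ﬁnance' written with the 'fi' ligature
unicodedata.normalize("NFKC", rareword) # -> 'finance' : the ligature is replaced by 'f' + 'i'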
Target character set¶
After a detailed study of all the frequent chars, the goal is to design a normalization pipeline which retains as much information as possible while greatly reducing the number of distinct chars.
We saw before that it is possible to preserve 99.9996% of the original chars while keeping only about 500 distinct chars. By being clever and replacing equivalent chars, we can divide this number by 2 and still retain the same amount of information.
It may then be useful to limit the number of distinct characters after normalization to 255:
- if needed, French text can then be encoded with a single byte per character (as sketched below)
- the list of supported chars can be memorized by NLP application developers and users
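The snippet below is a purely illustrative sketch of why such a limit matters (the charset used here is a hypothetical, truncated example and this code is not part of the frenchtext API): with at most 255 supported characters, each normalized char can be mapped to a single byte.
# Hypothetical, truncated charset : the real list contains up to 255 characters
charset = list(" abcdefghijklmnopqrstuvwxyz'éèàç.")

char2byte = {c: i for i, c in enumerate(charset)}
byte2char = {i: c for c, i in char2byte.items()}

def encode(text):
    # Each normalized char maps to a single byte
    return bytes(char2byte[c] for c in text)

def decode(data):
    return "".join(byte2char[b] for b in data)

decode(encode("l'été à paris."))  # -> "l'été à paris."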
from frenchtext.core import *
from frenchtext.chars import *
The 255 supported characters after normalization:
import pandas as pd
dfcharsnorm = pd.read_csv(chardatadir / "charset-fr.csv", sep=";")
dfcharsnorm
The table below shows the number of chars in each category (after normalization) per 100 million characters:
dfblocks = dfcharsnorm.groupby(by=["Category","SubCategory"]).agg({"Char":["count","sum"],"CountBusiness":"sum"})
dfblocks["CountBusiness"] = (dfblocks["CountBusiness"] / 27577304956 * 100000000).astype(int)
dfblocks
Normalization pipeline overview¶
The normalization pipeline applies the following 14 steps, which are explained and illustrated in the sections below.
- Fix encoding errors (illustrated in the sketch after this list)
- fix windows1252 text read as iso8859-1
- fix utf8 text read as windows1252
- fix windows1252 text read as utf8
- merge Unicode combining chars
- ignore control chars
- Remove display attributes
- replace latin letter symbols
- replace latin letter ligatures
- replace latin number symbols
- Normalize visually equivalent chars
- replace equivalent chars
- replace cyrillic and greek chars looking like latin letters
- Encode infrequent chars while losing a little bit of information
- replace infrequent latin letters with diacritics
- replace infrequent chars from other scripts
- replace infrequent symbols
- ignore remaining chars with no glyph
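As an illustration of the first group of steps, here is the standard encode/decode round trip from the Python standard library, which is the usual way to repair this kind of mojibake (not necessarily the library's exact implementation):
# Windows-1252 text read as ISO-8859-1 : curly quotes become the control chars 0x93 / 0x94
mojibake = "\x93belle\x94"
mojibake.encode("iso8859-1").decode("windows-1252")   # -> '“belle”'

# UTF-8 text read as Windows-1252 : 'é' becomes the two chars 'Ã©'
mojibake = "Ã©niÃ¨me"
mojibake.encode("windows-1252").decode("utf-8")       # -> 'énième'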
The statistics below count the number of chars normalized per 1 million chars in 4 distinct parts of the French datasets: business websites, forums, news, Wikipedia.
The first line of the table below shows that:
- in 1 million chars extracted from forum pages (raw user input), 41.8 chars will be encoding errors (windows1252 read as iso8859-1)
- in 1 million chars extracted from Wikipedia (curated content), only 0.006 chars will be encoding errors
These numbers show that character normalization is much more important in real-world applications than in academic papers based on clean Wikipedia text.
normstats = pd.read_csv(chardatadir / "stats" / "normalization.total.stats.csv")
normstats[["Transform","FreqBusiness","FreqForum","FreqPresse","FreqWikipedia"]]
Most frequent chars replaced by an equivalent character:
replacestats = pd.read_csv(chardatadir / "stats" / "normalization.layer8.stats.csv")
replacestats[["Char","CharName","FreqBusiness","FreqForum","FreqPresse","FreqWikipedia"]].head(20)
For example, the list of all Unicode chars which will be projected to a regular 'apostrophe':
replacechars = pd.read_csv(chardatadir / "normalizedchars.csv", sep=';')
replacechars[replacechars["NormChar"]=="'"][["Code","Char","CharName"]]
Frequency of characters from other scripts (Chinese, Arabic, Cyrillic, ...):
scriptsstats = pd.read_csv(chardatadir / "stats" / "normalization.layer11.stats.csv")
scriptsstats[["CharFamily","FreqBusiness","FreqForum","FreqPresse","FreqWikipedia"]]
Normalization pipeline API¶
Initialize a text normalizer:
%time norm = TextNormalizer()
norm
Normalize text:
teststring = chr(127995)+"① l`"+chr(156)+"uv"+chr(127)+"re est¨ "+chr(147)+"belle"+chr(148)+"¸ à ½ € énième ‰ "+chr(133)+" ⁽🇪ffic🇦ce⁾ !"
teststring
result = norm(teststring)
result
Describe the changes applied by the normalization pipeline:
print(result.describeChanges())
Compute spans for equivalent substrings before and after normalization:
result.output[0:12]
result.input[result.mapOutputIndexToInput(0):result.mapOutputIndexToInput(12)]
result.output[3:10]
result.input[result.mapOutputIndexToInput(3):result.mapOutputIndexToInput(10)]
Performance test: 2500 sentences per second => fast enough, but this will be optimized in a later version.
%timeit -n100 norm(teststring)
Appendix : Unicode utility functions¶
Unicode character properties:
charname("🙂")
charcategory("🙂")
charsubcategory("🙂")
charblock("🙂")
blockfamily('Emoticons')
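For comparison (a sketch only, not part of the frenchtext API), Python's standard unicodedata module exposes similar character properties:
import unicodedata

unicodedata.name("🙂")       # -> 'SLIGHTLY SMILING FACE'
unicodedata.category("🙂")   # -> 'So' (Symbol, other)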