
Automatic Spell Checker and Correction for Under-represented Spoken Languages: Case Study on Wolof

Thierno Ibrahima Cissé
Université du Québec à Montréal
[email protected]
Fatiha Sadat
Université du Québec à Montréal
[email protected]
Abstract

This paper presents a spell checker and correction tool specifically designed for Wolof, an under-represented spoken language in Africa. The proposed spell checker leverages a combination of a trie data structure, dynamic programming, and the weighted Levenshtein distance to generate suggestions for misspelled words. We created novel linguistic resources for Wolof, such as a lexicon and a corpus of misspelled words, using a semi-automatic approach that combines manual and automatic annotation methods. Despite the limited data available for the Wolof language, the spell checker’s performance showed a predictive accuracy of 98.31% and a suggestion accuracy of 93.33%.

Our primary focus remains the revitalization and preservation of Wolof as an Indigenous and spoken language in Africa, which motivates our efforts to develop novel linguistic resources. This work represents a valuable contribution to the growth of computational tools and resources for the Wolof language and provides a strong foundation for future studies in the field of automatic spell checking and correction.


1 Introduction

Linguistic diversity in Natural Language Processing (NLP) is essential to enable communication between different users and thus the development of linguistic tools that serve the inclusion of diverse communities. Several research studies on low-resource languages have emerged; however, spoken Indigenous and Endangered languages in Africa have been neglected, even though the cultural and linguistic richness they contain is inestimable.
Wolof, spoken by almost 10 million individuals worldwide, is a widely used lingua franca in West African countries such as Senegal, Gambia and Mauritania. It serves as the principal spoken language of Senegal (Diouf et al., 2017) and belongs to the Senegambian branch of the expansive Niger-Congo language family. Furthermore, the language has been officially acknowledged in West Africa (Eberhard et al., 2019). It is therefore not surprising that the intensive use of Wolof within the region has allowed it to be recognized as being of paramount importance.
Like several Indigenous and spoken languages of Africa, Wolof presents many challenges and issues, among which is the lack of linguistic resources and tools. Moreover, it is distinguished by a distinct tonal system that uses nasal vowels. The Wolof script comprises a total of 45 consonant phonemes, which are further subdivided into categories (Cissé, 2004). Table 1 illustrates the various Wolof consonants and their respective classifications.

Consonants
  Weak: p, t, c, k, q, b, d, j, g, m, n, ñ, ŋ, f, r, s, x, w, l, y
  Strong (Geminate): pp, tt, cc, kk, bb, dd, jj, gg, ŋŋ, ww, ll, mm, nn, yy, ññ, qq
  Strong (Prenasalized): mp, nt, nc, nk, nq, mb, nd, nj, ng
Table 1: Wolof Consonants and Classifications

Furthermore, the Wolof writing system integrates a set of 17 vowel phonemes (Cissé, 2004) complementing the already existing 45 consonant phonemes. Table 2 provides an overview of the Wolof vowels and their respective classifications.

Vowels
  Short: a, à, ã, i, o, ó, u, e, ë, é
  Long: ii, uu, éé, óó, ee, oo, aa
Table 2: Wolof Vowels and Classifications

As writing becomes increasingly important in our digital age, automatic spell checking plays a vital role in ensuring that written communications are both efficient and accurate. Despite the lack of standardization in their orthography, there has been a surge of interest in developing spell checkers for African Indigenous languages due to their growing importance in education, commerce, and diplomacy. Consequently, the number of spell checkers for these languages is slowly increasing.

Our main contribution in this paper is the development of new resources for the Wolof language. Specifically, we have created a spell checker for the autocorrection of Wolof text, as well as a corpus of misspelled words that will enable researchers to evaluate the performance of future autocorrection systems. Additionally, we have developed a Wolof lexicon that can be leveraged for a range of tasks beyond autocorrection, such as neural machine translation, automatic speech recognition, etc.
The resources that have been developed over the course of this study are made publicly accessible on GitHub (https://github.com/TiDev00/Wolof_SpellChecker), thereby enabling wider dissemination and facilitating the reproducibility of the research findings.

The remainder of our paper is structured as follows: in Section 2, we conduct a brief literature review and discuss some published studies. In Section 3, we describe our proposed methodology and the novel linguistic resources we developed. In Section 4, we present the results and evaluations of our study. In Section 5, we discuss the limitations of our system. Finally, Section 6 concludes the paper and outlines some promising perspectives for future work.

2 Background

Spelling correction consists of suggesting valid words that are close to a misspelled one. In order to create an automatic spelling correction system, it is imperative to comprehend the root causes of spelling errors (Baba and Suzuki, 2012).
Several studies of spelling errors have been conducted, with a notable contribution from Mitton (1996), who thoroughly analyzed different types of spelling mistakes in English and described methods to construct an automatic spelling correction system. Kukich (1992), on the other hand, presented a survey of documented findings on spelling error patterns and categorized spelling errors into two groups:

  • Lexical errors: Result of mistakes applied to individual words, regardless of their context within a sentence (Ten Hacken and Tschichold, 2001).

  • Grammatical errors: Include both morphological and syntactical errors. Morphological errors involve deficiencies in linguistic elements such as derivation, inflection, prepositions, articles, personal pronouns, auxiliary verbs, and determiners. Syntactical errors result from issues in linguistic components, including passive voice, tense, noun phrases, auxiliary verbs, subject-verb agreement, and determiners (Gayo and Widodo, 2018).

The causes of spelling errors are diverse and can stem from both cognitive and typographical sources (Peterson, 1980). Cognitive errors arise when an individual lacks a proper understanding of the correct spelling of a word, while typographical errors occur when incorrect keystrokes are made during typing. Literature in the field of spelling correction has typically approached these error types separately, with various techniques developed specifically to address each type (Kukich, 1992).
Despite the significance of language processing, there has been a shortfall of focus on the creation of automatic spelling correction tools, especially for low-resource languages. While some attempts have been made to apply standard automatic spelling correction techniques to a few African Indigenous languages (Boago Okgetheng et al., 2022; Méric, 2014; Salifou and Naroua, 2014), no such efforts have been made for the Wolof language. As far as we are aware, the only research solely dedicated to the correction of Wolof is (Lo et al., 2016), which provides an overview of the state of the art and outlines potential solutions for developing a tailored orthographic corrector for Wolof. Our research adopts commonly used approaches in the field and assesses the performance of our system using various known evaluation metrics.

3 Methodology

The system outlined in this study aims to identify and correct non-word errors in the Wolof language. To achieve this objective, we designed and implemented the pipeline shown in the flowchart in Figure 1.

[Flowchart: text input → preprocessing → error detection (rules validator, then lexicon check via trie dictionary lookup); words that pass both checks are accepted as correct, otherwise they are flagged as misspelled → error correction (word transformation, suggestion generation with the trie and the weighted Levenshtein distance, suggestion ranking) → correction with the first suggestion.]
Figure 1: Flowchart of the spell checker

The flowchart in Figure 1 provides a visual representation of the various components and processes involved in the proposed spell checker system. The system includes input and output mechanisms, algorithms for error detection and correction, as well as data structures and models that aid in its functionality. The aim of the system is to accurately identify and rectify non-word errors in the Wolof language. Through the presentation of the system’s overall architecture, the reader will be able to comprehend the workings of the system and its design principles aimed at detecting and correcting invalid words in the Wolof language.

3.1 Wolof Lexicon Generation

Our approach to generating a reliable Wolof lexicon involves the combination of manual annotation and automatic extraction methods.
First, manual annotation was performed on a corpus of Wolof text (James Cross et al., 2022) to identify unique words and extract them into a list. This methodology provides a thorough examination of the Wolof language and ensures the precision of the lexicon by enabling manual control over the inclusion of words.
Second, an automatic extraction was performed using Optical Character Recognition (OCR) methods, implemented as Python scripts and applied to several Wolof-French dictionaries (Fal et al., 1990; Diouf and Kenkyūjo, 2001). This methodology facilitates the expansion of the lexicon's coverage and enables the identification of additional words that may not have been captured through manual annotation alone.
Finally, to ensure the lexicon’s accuracy, the overall resulting data underwent proofreading. This step validated the correctness of the words and their spellings and allowed for any necessary revisions before the final lexicon was generated.
It is important to note that due to the limited availability of Wolof resources, the resulting lexicon only contains 1410 different words. Despite this constraint, the combination of manual annotation and automatic extraction methods allowed the generation of a reliable Wolof lexicon.
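For illustration, the merging and deduplication step could be sketched as follows; the file names and the lowercase normalization are assumptions, not a description of the exact scripts used in this study.

```python
# Illustrative sketch: merge manually annotated and OCR-extracted word lists
# into one deduplicated lexicon file. File names and lowercasing are
# assumptions, not the exact scripts used in this study.
def build_lexicon(*word_list_paths, output_path="wolof_lexicon.txt"):
    words = set()
    for path in word_list_paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                word = line.strip().lower()
                if word:                     # skip empty lines
                    words.add(word)
    with open(output_path, "w", encoding="utf-8") as out:
        out.write("\n".join(sorted(words)))
    return sorted(words)

# Hypothetical usage:
# lexicon = build_lexicon("manual_annotation.txt", "ocr_extraction.txt")
```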

3.2 Preprocessing

Our spell checking system implements a preliminary stage, which entails the removal of inputs that contain numerical characters, punctuation marks, or borrowed words from foreign languages. The outcome of this step serves as the basis for the detection of non-word errors in the text.
The preprocessing phase consists of three primary operations: the elimination of punctuation marks, normalization of the input, and segmentation of the text into individual words.

3.2.1 Punctuation Removal

The goal of the punctuation removal in our preprocessing step is to eliminate any non-essential punctuation marks present in the input text. These marks can hinder the efficiency of the spell checking process and may cause confusion during the analysis of the text (Rahimi and Homayounpour, 2022). By removing these marks, we ensure that the subsequent stages of the spell checking system can process the text more effectively and efficiently. The algorithm employed in this stage scans the input text and removes all instances of punctuation marks, including commas, periods, exclamation marks, and question marks. The output of this step is a cleaned text that is free of the extraneous elements and ready for focused analysis in subsequent stages of the system.

3.2.2 Normalization

The normalization phase in our preprocessing step transforms the input text into a standardized form by converting all alphabetic characters to lowercase and removing any words outside of the Wolof language.
The conversion to lowercase is essential as many NLP techniques treat words in different cases as separate entities, and converting the text to lowercase eliminates this case sensitivity impact on the analysis (HaCohen-Kerner et al., 2020).
By removing words from foreign languages, we aim to ensure that the text being analyzed is only in the target language and minimize the effect of words that may not hold semantic significance in the context of the Wolof language. This enhances the accuracy of the analysis and reduces the likelihood of introducing errors into the results.

3.2.3 Word Tokenization

Tokenization is a critical step in the automatic spell checking process, as it segments the input text into smaller units referred to as tokens.
There is a range of techniques used for tokenization that vary depending on the language and task at hand. These may include splitting on whitespace and punctuation, using regular expressions, or utilizing dictionaries and morphological rules (Dalrymple et al., 2006).
The tokenization process results in units that can range from individual characters or words to phrases or even full sentences. However, word tokenization is the most commonly used form of segmentation in spell checking systems, as it separates the text into individual words and provides a solid foundation for the identification of spelling errors (Mosavi Miangah, 2013; Rahman et al., 2021; Abdulrahman and Hassani, 2022).
Word tokenization not only enhances the accuracy and efficiency of the spell checking process but also allows for an analysis of the context in which each word appears. This enables the spell checking system to make more informed suggestions for appropriate spelling corrections, ultimately improving the accuracy of the results.
In the current investigation, we are implementing the process of tokenization with a focus on word-level segmentation.
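As a rough illustration of the three preprocessing operations described above, the sketch below removes punctuation, lowercases the input, and tokenizes on whitespace; the filtering of borrowed foreign words is omitted, since the paper does not detail how it is performed.

```python
import re
import string

def preprocess(text: str) -> list[str]:
    # 1. Punctuation removal (commas, periods, exclamation and question marks, ...)
    text = text.translate(str.maketrans("", "", string.punctuation))
    # 2. Normalization: convert every alphabetic character to lowercase
    text = text.lower()
    # 3. Word tokenization on whitespace
    return re.findall(r"\S+", text)

print(preprocess("Ñaar la, dëkk bi!"))  # ['ñaar', 'la', 'dëkk', 'bi']
```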

3.3 Error Detection

The error detection stage in our spell checking system is designed to identify non-word errors in the text. This is achieved through a two-step process, consisting of validation against Wolof writing rules and comparison with a constructed lexicon.

3.3.1 Rules Validator

The spelling of words in the Wolof language follows certain conventions, as described by the CVC and CVCV(C) forms for monosyllabic and disyllabic words, respectively (Merrill, 2021). These conventions specify that the final consonant and vowel of a syllable cannot both be long, and strong consonants cannot appear after a long vowel or at the beginning of a word, except for prenasalized consonants.
Our error detection stage includes a validation step that rigorously checks each word in the input text against these writing conventions. If a word is found to be in compliance with these rules, it will move on to the next stage of validation. Conversely, if the word is determined to be non-compliant, it will be flagged as invalid and require correction.
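To make these conventions concrete, the sketch below encodes two of them using the consonant and vowel classes of Tables 1 and 2: a geminate (strong, non-prenasalized) consonant may not open a word and may not follow a long vowel. This is an assumption about how such a validator could be coded, not the authors' exact rule set.

```python
# Simplified rules-validator sketch based on Tables 1 and 2; it covers only
# two of the writing conventions and is illustrative, not exhaustive.
GEMINATES = {"pp", "tt", "cc", "kk", "bb", "dd", "jj", "gg", "ŋŋ",
             "ww", "ll", "mm", "nn", "yy", "ññ", "qq"}
LONG_VOWELS = {"ii", "uu", "éé", "óó", "ee", "oo", "aa"}

def passes_writing_rules(word: str) -> bool:
    # Rule: a word may not begin with a geminate (strong) consonant.
    if word[:2] in GEMINATES:
        return False
    # Rule: a geminate may not appear immediately after a long vowel.
    for i in range(2, len(word) - 1):
        if word[i - 2:i] in LONG_VOWELS and word[i:i + 2] in GEMINATES:
            return False
    return True

print(passes_writing_rules("dëkk"))   # True
print(passes_writing_rules("kkado"))  # False: word-initial geminate
```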

3.3.2 Lexicon Check

In the lexicon verification phase, the spell checking system assesses each word in the input text against the Wolof lexicon to determine its validity. The lexicon, being a large repository of words, can pose challenges for quick and efficient searches. To address this, various techniques such as hash tables (Kukich, 1992), binary search (Knuth, 1998), the trie data structure (Bentley and Sedgewick, 1997), and Bloom filters (Bloom, 1970) have been developed to enable fast dictionary lookups.
In the present spell checker, the system uses the trie data structure, which organizes the lexicon into nodes that represent individual characters and the root node that represents the empty string. In this structure, searching for a word in the lexicon involves following the path through the trie that corresponds to the characters of the target word (Feng et al., 2012). If the end of the path is a terminal node, the word is considered to be in the lexicon and deemed valid. Conversely, if the path ends before reaching a terminal node, the word is considered incorrect and corrections are initiated.
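A minimal sketch of this trie lookup is shown below; insertion builds one node per character, and a word is considered valid only if its path ends on a terminal node.

```python
class TrieNode:
    def __init__(self):
        self.children = {}        # maps a character to a child node
        self.is_terminal = False  # marks the end of a valid word

class Trie:
    def __init__(self, words=()):
        self.root = TrieNode()
        for word in words:
            self.insert(word)

    def insert(self, word: str) -> None:
        node = self.root
        for char in word:
            node = node.children.setdefault(char, TrieNode())
        node.is_terminal = True

    def contains(self, word: str) -> bool:
        node = self.root
        for char in word:
            if char not in node.children:
                return False          # the path ends before the word does
            node = node.children[char]
        return node.is_terminal       # valid only if we stop on a terminal node

lexicon = Trie(["dëkk", "ñaar", "ginnaaw"])
print(lexicon.contains("ñaar"))  # True
print(lexicon.contains("ñaa"))   # False: a prefix, not a terminal node
```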

3.4 Error Correction

The last stage of the spell correction procedure is to produce potential replacements for the incorrectly spelled word. Our correction techniques, described below, focus exclusively on the word itself and do not consider the context in which it appears. The correction process comprises three distinct phases: translation of French compound sounds, generation of candidate suggestions, and ranking of those suggestions.

3.4.1 Translation of French compound sounds

Prior to implementing the module responsible for translating French compound sounds into Wolof, we collected a small amount of Wolof data from various sources. This data was sourced from news websites (https://www.wolof-online.com/), social media platforms (https://twitter.com/SaabalN) and religious websites (http://biblewolof.com/). A thorough analysis of this data was carried out to determine the most common misspellings made by Wolof speakers when writing in the language. Our findings indicated that a significant number of these errors were due to the usage of the French alphabet instead of the Wolof alphabet. This often resulted in the presence of French compound sounds or letters that are not native to the Wolof language. Furthermore, it was observed that accents, which play a crucial role in ensuring proper pronunciation and meaning of words, were frequently neglected. To showcase these findings, Table 3 presents some of the misspellings observed and their correct Wolof equivalents.

Misspelling → Correct Wolof
dadialé → dajale (to gather)
guinaw → ginnaaw (behind)
mousiba → musiba (danger)
deuk → dëkk (village)
thiossane → cosaan (tradition)
gnopati → ñoppati (to pinch)
niaar → ñaar (two)
sakhar → saxaar (train)
tank → tànk (foot)
Table 3: Misspellings and correct Wolof words

Taking into consideration the common errors observed in the analysis of Wolof language data, our system is designed to assess each word for the presence of French compound sounds or letters that are extraneous to the Wolof alphabet. Should such sounds be detected, the module translates them into their corresponding Wolof counterparts. Letters not belonging to the Wolof alphabet are systematically eliminated. Upon completing these transformations, the output is directed to the next phase of the correction process, the candidate suggestion module.
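The sketch below illustrates this transformation step. The replacement table is an assumption inferred from the examples in Table 3 (e.g. "ou" → "u", "kh" → "x", "gn" → "ñ"); the actual module may use a different and more complete mapping.

```python
# Illustrative mapping of French compound sounds to Wolof; entries are
# assumptions inferred from Table 3, not the module's actual table.
FRENCH_TO_WOLOF = {"ou": "u", "eu": "ë", "kh": "x", "gn": "ñ", "th": "c", "di": "j"}
WOLOF_ALPHABET = set("aàãbcdeéëfgijklmnñŋoópqrstuwxy")

def transform(word: str) -> str:
    for french, wolof in FRENCH_TO_WOLOF.items():
        word = word.replace(french, wolof)
    # Drop any remaining letters that do not belong to the Wolof alphabet.
    return "".join(ch for ch in word if ch in WOLOF_ALPHABET)

print(transform("mousiba"))  # musiba
print(transform("sakhar"))   # saxar (the remaining gap to "saxaar" is left to the suggestion module)
```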

3.4.2 Generation of Candidate Suggestions

In the current system, to generate potential alternatives for misspelled words, we have implemented a lexicographical distance comparison method. This process involves determining the minimum number of edit operations, such as insertion, deletion, transposition, and substitution, necessary to change one word into another (Vienney, 2004). The more significant the disparities between two words, the greater the lexicographical distance between them. Out of various lexicographical distance metrics, the Levenshtein Distance (Levenshtein, 1965) is the most commonly utilized. It quantifies the difference between two strings based on the three fundamental string operations: substitution, insertion, and deletion.
Let $Lev_{\alpha,\beta}$ be the Levenshtein distance between the subsequence formed by the first $\alpha$ characters of a word $W_1$ and the subsequence formed by the first $\beta$ characters of a word $W_2$. The Levenshtein distance between the two words $W_1$ and $W_2$ (of lengths $|W_1|$ and $|W_2|$ respectively) can be recursively calculated using Formula 1 (Levenshtein, 1965).

$Lev_{\alpha,\beta}=\begin{cases}\max(\alpha,\beta) & \text{if } \min(\alpha,\beta)=0\\ \min\begin{cases}Lev_{\alpha-1,\beta}+1\\ Lev_{\alpha,\beta-1}+1\\ Lev_{\alpha-1,\beta-1}+1_{(W_{1\alpha}\neq W_{2\beta})}\end{cases} & \text{otherwise}\end{cases}$ (1)

The Levenshtein distance, computed using its recursive equation, can be computationally expensive (Gusfield, 1997), especially for large distances, as it has an exponential time complexity of $O(3^{\min(|W_{1}|,|W_{2}|)})$. To address this issue, our approach combines two techniques: dynamic programming and the trie data structure.
Dynamic programming (Almudevar, 2001), as a technique for solving problems by decomposing them into more manageable subproblems and storing the solutions, helps reduce the number of redundant calculations by providing a more efficient storage of intermediate results. When applied to the Levenshtein distance, it allows for the intermediate results of partial computations to be stored in a matrix, leading to a more efficient calculation of the final result.
By combining dynamic programming and the trie data structure, our approach effectively prunes the search space and avoids redundant calculations. This provides a powerful combination for computing the Levenshtein distance in a fast and efficient manner, even for large inputs.
In the standard Levenshtein distance, all edit operations are assigned a uniform cost of 1. However, considering the findings discussed earlier, a cost matrix was introduced to allow for the assignment of varying costs to different edit operations. This allows for a more nuanced representation of the importance of each operation. The cost for insertions and deletions remains at 1 for all characters. Substitution operations between source and target characters are assigned a cost of 1 if the character couple is listed in Table 4, otherwise, a cost of 2 is assigned.

Couple Substitution cost
(’a’, ’à’) 1
(’a’, ’ã’) 1
(’o’, ’ó’) 1
(’e’, ’é’) 1
(’e’, ’ë’) 1
(’é’, ’ë’) 1
(’x’, ’q’) 1
Table 4: Substitution cost of specific couples

Our suggestion module generates potential candidate words for a given misspelled word through the computation of the edit distance between the misspelled word and each valid word in the Wolof lexicon. For each candidate word, the cost of transforming the misspelled word into the candidate word is provided.
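A minimal sketch of this weighted distance and of the candidate generation is given below, assuming the plain dynamic-programming formulation; the trie-based pruning described earlier is omitted for brevity. Insertions and deletions cost 1, and substitutions cost 1 for the pairs of Table 4 and 2 otherwise.

```python
# Character pairs from Table 4 whose substitution costs 1; all other
# substitutions cost 2, while insertions and deletions cost 1.
CHEAP_SUBSTITUTIONS = {frozenset(p) for p in
                       [("a", "à"), ("a", "ã"), ("o", "ó"),
                        ("e", "é"), ("e", "ë"), ("é", "ë"), ("x", "q")]}

def substitution_cost(a: str, b: str) -> int:
    if a == b:
        return 0
    return 1 if frozenset((a, b)) in CHEAP_SUBSTITUTIONS else 2

def weighted_levenshtein(w1: str, w2: str) -> int:
    # dp[i][j]: cost of turning the first i characters of w1 into the first j of w2
    dp = [[0] * (len(w2) + 1) for _ in range(len(w1) + 1)]
    for i in range(1, len(w1) + 1):
        dp[i][0] = i                                   # i deletions
    for j in range(1, len(w2) + 1):
        dp[0][j] = j                                   # j insertions
    for i in range(1, len(w1) + 1):
        for j in range(1, len(w2) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,           # deletion
                           dp[i][j - 1] + 1,           # insertion
                           dp[i - 1][j - 1] + substitution_cost(w1[i - 1], w2[j - 1]))
    return dp[len(w1)][len(w2)]

def suggest(word, lexicon, k=3):
    # Rank every lexicon entry by its edit cost to the misspelled word.
    ranked = sorted((weighted_levenshtein(word, entry), entry) for entry in lexicon)
    return [(entry, cost) for cost, entry in ranked[:k]]

print(suggest("saxar", ["saxaar", "ñaar", "dëkk"]))  # [('saxaar', 1), ...]
```

The ranking step of the next subsection then simply keeps the candidate with the smallest cost.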

3.4.3 Ranking of candidate suggestions

In the following phase of our methodology, the candidate words generated from the previous stage are subjected to evaluation. The ranking is performed based on the proximity of the candidate words to the misspelled word, with the candidate word having the smallest edit cost being assigned the highest rank. The candidate word with the lowest edit cost is determined to be the closest match and is therefore selected as the most likely substitution for the incorrect word.

4 Evaluations

In order to assess the performance of our spell checking system, we first constructed a corpus of misspelled words, then selected the appropriate evaluation metrics, and finally, we implemented the chosen metrics to assess the performance of the system.

4.1 Generation of a Misspelled Word Corpus

The creation of a Misspelled Word Corpus followed a similar method as the generation of the Wolof lexicon. We used a hybrid approach of manual and automatic annotation, followed by proofreading. The method involved the selection of commonly misspelled Wolof words discovered through social media, religious websites, and news websites. For each misspelling, we manually added its correction. This process resulted in the formation of a corpus consisting of 3070 words, with 1075 valid words and 1995 invalid words. The edit distance between the misspelled words and their corrected forms is presented in Table 5.

Edit Distance Count Percentage
1 400 20.05%
2 412 20.65%
3 445 22.31%
4 281 14.09%
5 204 10.23%
6 114 5.71%
7 67 3.36%
8 36 1.80%
9 23 1.15%
10 9 0.45%
11 2 0.1%
12 1 0.05%
13 1 0.05%
Total 1995 100%
Table 5: Edit distance of misspellings against their corrections

4.2 Selection of the Evaluation Metrics

There are various factors to consider in evaluating spelling checkers. Conventional metrics, including recall and precision, have long been used to gauge the linguistic proficiency of such tools. Nevertheless, from a usage-centered perspective, these evaluation parameters have limitations, since certain variables intrinsic to the evaluation of spelling checkers are absent from these metrics.

To determine the reliability of our spell checker, we employed the metrics proposed in (Starlander and Popescu-Belis, 2002; Voorhees and Garofolo, 2000; Paggio and Music, 1998; Paggio and Underwood, 1998; King, 1999), as well as the following measures:

  • True positive (TP): correct word which is recognized as correct by the spell checker.

  • False positive (FP): incorrect word which is recognized as correct by the spell checker.

  • False negative (FN): correct word which is recognized as incorrect by the spell checker.

  • True negative (TN): incorrect word which is recognized as incorrect by the spell checker.

Despite their age, these metrics remain widely used in the current state of the art for evaluating spell checkers, particularly those designed for low-resource languages such as (Abdulrahman and Hassani, 2022) and (Boago Okgetheng et al., 2022).

The other metrics used in these evaluations are described as follows:

4.2.1 Lexical Recall or $R_c$

It is determined by calculating the ratio of correctly recognized valid words in the text by the spell checker, to the total number of accurate words in the same text, as shown in Formula 2 (Starlander and Popescu-Belis, 2002).

$R_{c}=\frac{T_{p}}{T_{p}+F_{n}}$ (2)

4.2.2 Error Recall or $R_i$

It is expressed as the fraction of incorrect words in the text detected by the spell checker, compared to the overall number of incorrect words in the text, as shown in Formula 3 (Starlander and Popescu-Belis, 2002).

$R_{i}=\frac{T_{n}}{T_{n}+F_{p}}$ (3)

4.2.3 Lexical Precision or $P_c$

It is calculated by dividing the total number of valid words accurately recognized by the spelling checker by the sum of valid words recognized by the spell checker and the quantity of invalid words that were not identified by the spell checker as incorrect, as shown in Formula 4 (Starlander and Popescu-Belis, 2002).

$P_{c}=\frac{T_{p}}{T_{p}+F_{p}}$ (4)

4.2.4 Error Precision or $P_i$

It is determined by dividing the number of accurate flags made by the spell checker by the total number of flags issued by the system, as shown in Formula 5 (Starlander and Popescu-Belis, 2002).

$P_{i}=\frac{T_{n}}{T_{n}+F_{n}}$ (5)

4.2.5 Lexical F-measure or $Fm_c$

It enables the calculation of the harmonic mean between lexical recall and lexical precision, as shown in Formula 6 (Starlander and Popescu-Belis, 2002).

$Fm_{c}=\frac{2}{\frac{1}{R_{c}}+\frac{1}{P_{c}}}$ (6)

4.2.6 Error F-measure or $Fm_i$

The Error F-measure is calculated by computing the harmonic mean of error recall and error precision, as shown in Formula 7 (Starlander and Popescu-Belis, 2002).

$Fm_{i}=\frac{2}{\frac{1}{R_{i}}+\frac{1}{P_{i}}}$ (7)

4.2.7 Predictive Accuracy or $PA$

It quantifies the probability of any word, whether correct or incorrect, being processed correctly by the spelling checker. It is calculated using Formula 8 (Starlander and Popescu-Belis, 2002).

$PA=\frac{T_{p}+T_{n}}{T_{p}+T_{n}+F_{p}+F_{n}}$ (8)

4.2.8 Suggestion Adequacy or $SA$

It measures the ability of our spell checker to suggest accurate spelling alternatives for a misspelled word. Let $S$ denote a proper recommendation for an incorrect word and $N$ represent the total number of misspelled words. The Suggestion Adequacy of our system is calculated using Formula 9 (Starlander and Popescu-Belis, 2002).

$SA=\frac{1}{N}\sum_{i=1}^{N}S_{i}$ (9)

4.2.9 Mean Reciprocal Rank or $MRR$

As previously stated, our spell checker systematically selects the first word in the list of recommendations as the most likely substitution for the misspelled word. However, as selecting the initial option in the recommended list may not always be the appropriate choice, we utilize the $MRR$ metric to assess the ranking methodology. Let $N$ be the total number of incorrect words and $Rank_{i,c}$ be the position of the correct suggestion in the list of suggestions for the $i^{th}$ misspelled word. The $MRR$ is computed using Formula 10 (Voorhees and Garofolo, 2000).

$MRR=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{Rank_{i,c}}$ (10)
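For illustration, the counting-based metrics of Formulas 2 to 8 can be computed directly from the four confusion counts; the counts used in the example below are those implied by Table 6 in the next subsection and are shown only to make the formulas concrete.

```python
def evaluate(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    r_c = tp / (tp + fn)                  # lexical recall, Formula 2
    r_i = tn / (tn + fp)                  # error recall, Formula 3
    p_c = tp / (tp + fp)                  # lexical precision, Formula 4
    p_i = tn / (tn + fn)                  # error precision, Formula 5
    fm_c = 2 / (1 / r_c + 1 / p_c)        # lexical F-measure, Formula 6
    fm_i = 2 / (1 / r_i + 1 / p_i)        # error F-measure, Formula 7
    pa = (tp + tn) / (tp + tn + fp + fn)  # predictive accuracy, Formula 8
    return {"Rc": r_c, "Ri": r_i, "Pc": p_c, "Pi": p_i,
            "Fmc": fm_c, "Fmi": fm_i, "PA": pa}

# Counts implied by Table 6: 1023 true positives, 52 false negatives,
# 1995 true negatives, 0 false positives.
print(evaluate(tp=1023, fp=0, fn=52, tn=1995))
# Rc ≈ 0.9516, Ri = 1.0, Pc = 1.0, Pi ≈ 0.9746, PA ≈ 0.9831
```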

4.3 Experiments

4.3.1 Results

The results of our spell checker, as demonstrated in Table 6, exhibit a remarkable level of proficiency in various aspects of spelling correction.

Metrics Ratio Percentage
$R_c$ 1023/1075 95.16%
$R_i$ 1995/1995 100%
$P_c$ 1023/1023 100%
$P_i$ 1995/2047 97.46%
$Fm_c$ 0.9752 97.52%
$Fm_i$ 0.9871 98.71%
$PA$ 3018/3070 98.31%
$SA$ 1862/1995 93.33%
$MRR$ 0.9604 96.04%
Table 6: Performance measures of the spell checker

The recall scores of 95.16% ($R_c$) and 100% ($R_i$) reflect the comprehensive nature of the lexicon utilized by the spell checker, as well as its relatively unspoiled status.
The spell checker exhibits an exceptional level of precision, with scores of 100% ($P_c$) and 97.46% ($P_i$), indicating its reliability in accurately identifying spelling errors as well as valid words.
The F-measure scores of 97.52% ($Fm_c$) and 98.71% ($Fm_i$) demonstrate the spell checker's avoidance of simplistic strategies, thereby ensuring its efficiency.
The spell checker's suggestion adequacy ($SA$) score of 93.33% attests to the suitability and veracity of the most probable alternative to the misspelled word presented to the end-user.
The mean reciprocal rank ($MRR$) score of 96.04% highlights the quality of the ranking of suggestions presented by the spell checker.
Finally, the overall linguistic performance of the spell checker, as indicated by its predictive accuracy ($PA$) score of 98.31%, is highly satisfactory.

4.3.2 Errors analysis

To fully understand and identify the linguistic limitations of our spell checker, we conducted an investigation into the edit distances of the misspelled words for which the system produced an incorrect suggestion. The outcome of this study is presented in Table 7.

Edit Distance Count Percentage
1 4 3.01%
2 17 12.78%
3 32 24.06%
4 20 15.04%
5 22 16.54%
6 13 9.77%
7 10 7.52%
8 11 8.27%
9 3 2.26%
10 1 0.75%
Total 133 100%
Table 7: Edit distance of misspellings with wrong suggestions

After a thorough examination of the results displayed, we surprisingly noted that there was no significant linear correlation between the edit distance of a misspelled word and the probability of the spell checker generating incorrect suggestions. These findings are in line with those displayed in Table 5. The majority of words in our misspelled word corpus had an edit distance of 3, which increased the likelihood of the spell checker producing a wrong suggestion for misspelled words with an edit distance of 3. Additionally, as misspelled words with edit distances of 11, 12, and 13 were under-represented in our corpus, the spell checker’s suggestions for these words were all accurate. This reinforces our conclusion that the higher the frequency of misspelled words with a specific edit distance, the greater the chances of the spell checker generating inaccurate suggestions for misspelled words with that same edit distance.

5 Limitations

Despite the impressive performance and minimal processing time of our spell checker, it is important to acknowledge its limitations.
Firstly, the spell checker is restricted to the words included in the created Wolof lexicon and cannot recognize words outside of it. Secondly, the weighted Levenshtein distance algorithm used may not always accurately reflect the likelihood of different types of errors, leading to potential inaccuracies in the suggestions.
Thirdly, the dynamic programming and trie data structures utilized may result in false positive suggestions due to a lack of consideration for the semantic meaning of words. Additionally, the computational cost of our approach can be substantial, particularly for larger lexicons or words with numerous possible corrections. Finally, the lack of context awareness may result in missed errors or incorrect suggestions.

6 Conclusion

This paper presented a novel spell checker for the Wolof language that has demonstrated its potential, owing to its effective combination of the trie data structure, dynamic programming, and the weighted Levenshtein distance. The hybrid approach of manual and automatic annotation enabled the construction of a comprehensive lexicon and a Misspelled Word Corpus, allowing for a robust evaluation of the spell checker's potential despite the limited data available for the language. Through these efforts, we hope to advance the state of NLP research for the Wolof language and contribute to preserving the linguistic heritage of African nations, ensuring that their distinct cultural expressions are protected for future generations.
The findings of this research provide compelling evidence of the viability of the spell checker for the Wolof language, opening avenues for further improvement and exploration.

For future research, it would be of interest to study the effect of increasing the size of the lexicon and the Misspelled Word Corpus on the spell checker's performance. Furthermore, a comparison of the spell checker's performance with other spell-checking methods used in low-resource languages, such as Indigenous African languages, could provide valuable insights into the strengths and weaknesses of the current approach. The integration of context-aware, state-of-the-art techniques, such as those based on machine learning and deep neural networks, could also be explored to further enhance the spell checker's capabilities.

Acknowledgements

We thank the anonymous reviewers for their helpful comments and feedback.
We also thank the participants of the Wolof community for giving their time, wisdom, and expertise.

References