[CWB] Strange issue with character encoding (?) in frequency lists
Hardie, Andrew
a.hardie at lancaster.ac.uk
Mon May 27 21:37:26 CEST 2019
OK, that suggests it is using the default collation, which is utf8_general_ci.
There is a known issue with frequency tables that use CI collations: although all diacritic variants are folded together, the version that appears is the first version that is seen. To take an English example, if you have
1 example of naive
+
10 examples of naïve later in the corpus
then what will appear is naive with no diaeresis. (The same happens with case.)
This issue would seem to be behind the cases you report like “mi” and “mí”. The frequency list rolls them together, so it is just luck of the draw which shows up. It’s expected behaviour.
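The folding behaviour described above can be sketched in Python: a CI collation effectively groups entries under an accent- and case-folded key, and the displayed form is whichever surface variant was seen first. (The `fold` function here is only a rough approximation of MySQL's utf8_general_ci, for illustration.)

```python
import unicodedata
from collections import Counter

def fold(s):
    """Rough approximation of utf8_general_ci: strip diacritics, lowercase."""
    nfd = unicodedata.normalize("NFD", s)
    return "".join(c for c in nfd if not unicodedata.combining(c)).lower()

def frequency_list(tokens):
    """Group tokens under a folded key; the first-seen variant becomes
    the displayed form, as in a CI-collated frequency table."""
    display = {}          # folded key -> first surface form encountered
    counts = Counter()
    for tok in tokens:
        key = fold(tok)
        display.setdefault(key, tok)
        counts[key] += 1
    return {display[k]: n for k, n in counts.items()}

# One early "naive" plus ten later occurrences of "naïve"
# all surface under the first-seen spelling:
print(frequency_list(["naive"] + ["naïve"] * 10))  # {'naive': 11}
```

The same mechanism explains why "mi" and "mí" are rolled together: both fold to the same key, and whichever happened to come first in the data supplies the display form.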
>> However, when I click on the links for these words and go to the concordance, not a single word has these marks. When I further click through and go to the source texts, the marks also aren't there.
This would make sense because the frequency table contains the lemma whereas the concordance contains the word. To see if the lemmas you’ve found have the bad accent, you’d need to use the “download tabulation” function to view the lemma. But, since you’ve looked at the original files, that step is not necessary.
It would be quite possible for a single bad accent that happens to occur early on to muck things up for entire lemmas, which could be the result of one dodgily-encoded text; but if there is nothing in any vrt file, then that suggests something else is going on.
I think the best thing would be, for one word, to use CQP queries to see exactly what is in the index, i.e. run the following:
[lemma="que"]
[lemma="qúe"]
[lemma="qùe"]
(etc., for whatever the possibilities are)
to see how many of each thing are actually there in your CWB data index.
That will narrow down the problem, i.e. determine whether it is a CWB issue or a MySQL issue.
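If the CQP counts look clean, the MySQL side can then be checked directly. A sketch, assuming a CQPweb-style frequency table named on the freq_corpus_* pattern mentioned earlier (the exact name depends on your installation): forcing a binary collation makes the comparison accent-sensitive, so each variant is listed separately rather than folded together.

```sql
-- Hypothetical table name following CQPweb's freq_corpus_* pattern;
-- adjust to match your installation. The _bin collation compares
-- byte-for-byte, so 'que', 'qúe' and 'qùe' stay distinct:
SELECT item, freq
FROM freq_corpus_nameofyrcorpus_lemma
WHERE item COLLATE utf8_bin IN ('que', 'qúe', 'qùe');
```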
best
Andrew.
From: cwb-bounces at sslmit.unibo.it <cwb-bounces at sslmit.unibo.it> On Behalf Of Scott Sadowsky
Sent: 25 May 2019 21:39
To: Open source development of the Corpus WorkBench <cwb at sslmit.unibo.it>
Cc: Open source development of the Corpus WorkBench <CWB at liste.sslmit.unibo.it>
Subject: Re: [CWB] Strange issue with character encoding (?) in frequency lists
On Sat, May 25, 2019 at 2:20 PM Hardie, Andrew <a.hardie at lancaster.ac.uk<mailto:a.hardie at lancaster.ac.uk>> wrote:
Hi Andrew,
One possibility is that the wrong charset/collation is being activated for the frequency tables. Could you check this?
If you run SHOW CREATE TABLE freq_corpus_nameofyrcorpus_word; at the mysql command prompt, then the character set / collation should be stated either for the table as a whole, or for the “item” column.
That shows "ENGINE=InnoDB DEFAULT CHARSET=utf8". All my source texts are UTF8, and the database is created as that too, by the way.
Cheers,
Scott
From: cwb-bounces at sslmit.unibo.it <cwb-bounces at sslmit.unibo.it> On Behalf Of Scott Sadowsky
Sent: 25 May 2019 13:45
To: Open source development of the Corpus WorkBench <CWB at liste.sslmit.unibo.it>
Subject: [CWB] Strange issue with character encoding (?) in frequency lists
I've run into a strange issue that might have to do with character encoding (or it might not).
When I go to Corpus Queries > Frequency lists, select my full corpus, choose to view a list based on lemmas, and then hit Show Frequency List, I get a list of lemmas in which quite a few have phantom accent marks and other diacriticals, e.g. "sì", "còmo", "èn", "sú", "ïgual", "cúando" (obviously, this is a Spanish corpus).
However, when I click on the links for these words and go to the concordance, not a single word has these marks. When I further click through and go to the source texts, the marks also aren't there.
I've grepped through my tagger's dictionary files (FreeLing), and none of these forms exist as lemmas or lexemes. I've also grepped through the *.vrt files that the corpus was compiled from, and none of these forms are present.
I've run into an additional strange issue that is probably related. When I make a subcorpus that is an exact copy of the source corpus, the same problem occurs, but most of the spurious accents and such are different (e.g. "qúe", "nó", "á", "sì", "én").
I'm attaching an edited screenshot that shows the top of the frequency list based on the full corpus on the left and the subcorpus that contains the full corpus on the right, with errors in red boxes.
[Lemmas.png]
Of course, in some cases both of the highlighted forms exist in Spanish (e.g. #28, "mi" and "mí"), but in spite of being different they have the same frequencies in the corpus and the subcorpus, which further suggests that it's not the underlying data that's causing this.
Best,
Scott