
Computational Linguistics in the Netherlands Journal 4 (2014). Submitted 06/2014; published 12/2014.

Gender Recognition on Dutch Tweets
Hans van Halteren and Nander Speerstra, Radboud University Nijmegen, CLS, Linguistics

Abstract

In this paper, we investigate gender recognition on Dutch Twitter material, using a corpus consisting of the full tweet production (as far as present in the TwiNL data set) of 600 users (known to be human individuals) over 2011 and 2012. We experimented with several authorship profiling techniques and various recognition features, using tweet text only, in order to determine how well they could distinguish between male and female authors of tweets. We achieved the best results, 95.5% correct assignment in a 5-fold cross-validation on our corpus, with Support Vector Regression on all token unigrams. Two other machine learning systems, Linguistic Profiling and TiMBL, come close to this result, at least when the input is first preprocessed with PCA.

Introduction

In the Netherlands, we have a rather unique resource in the form of the TwiNL data set: a daily updated collection that probably contains at least 30% of the Dutch public tweet production since 2011 (Tjong Kim Sang and van den Bosch 2013). However, as with any collection that is harvested automatically, its usability is reduced by a lack of reliable metadata. For all techniques and features, we ran the same 5-fold cross-validation experiments in order to determine how well they could be used to distinguish between male and female authors of tweets. In the following sections, we first present some previous work on gender recognition (Section 2).

Currently the field is getting an impulse for further development, now that vast data sets of user-generated content are becoming available. A 2012 study shows that authorship recognition is also possible (to some degree) if the number of candidate authors is as high as 100,000 (as compared to the usually fewer than ten in traditional studies).

Later, in 2004, the group around Moshe Koppel collected a Blog Authorship Corpus (BAC; Schler et al. 2006), containing about 700,000 posts (in total about 140 million words) by almost 20,000 bloggers. Slightly more information seems to be coming from content (75.1% accuracy) than from style (72.0% accuracy). We see the women focusing on personal matters, leading to important content words like love and boyfriend, and important style words like I and other personal pronouns.
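As a concrete illustration of the setup described in the abstract and introduction above, the sketch below runs a 5-fold cross-validation with Support Vector Regression on token unigram counts, thresholding the regression output to obtain a male/female decision. This is a minimal sketch assuming scikit-learn; the vectorizer settings, the zero threshold, and the placeholder data are illustrative assumptions, not the authors' actual pipeline (which also compares Linguistic Profiling and TiMBL on PCA-preprocessed input).

```python
# Hypothetical sketch, not the authors' code: 5-fold cross-validation with
# Support Vector Regression on token unigram counts, thresholding the
# regression output at 0 to obtain a male/female decision.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import KFold
from sklearn.svm import SVR

# Placeholder data: one concatenated tweet stream per author,
# with gender coded as -1 (male) / +1 (female).
authors = [
    "tweet text of author 1 ...",
    "tweet text of author 2 ...",
    "tweet text of author 3 ...",
    "tweet text of author 4 ...",
    "tweet text of author 5 ...",
]
genders = np.array([-1, 1, -1, 1, 1])

# All token unigrams as count features (whitespace-delimited tokens).
vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(authors)

correct = 0
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(genders):
    model = SVR(kernel="linear")
    model.fit(X[train_idx], genders[train_idx])
    scores = model.predict(X[test_idx])        # continuous gender score
    predicted = np.where(scores >= 0, 1, -1)   # threshold into a class label
    correct += int(np.sum(predicted == genders[test_idx]))

print("accuracy:", correct / len(genders))
```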

One gets the impression that gender recognition is more sociological than linguistic, showing what women and men were blogging about back in 2004. A later study (Goswami et al.

A 2012 study used SVMlight to classify gender on Nigerian Twitter accounts, with tweets in English, with a minimum of 50 tweets per author. Their highest score when using just text features was 75.5%, testing on all the tweets by each author (with a train set of 3.3 million tweets and a test set of about 418,000 tweets).

Their features were hashtags, token unigrams, and psychometric measurements provided by the Linguistic Inquiry and Word Count software (LIWC; Pennebaker et al.). Although LIWC appears a very interesting addition, it hardly adds anything to the classification.
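To make that feature combination concrete, the sketch below stacks hashtag counts and token unigram counts together with a few extra numeric scores standing in for LIWC-style psychometric measurements, and trains a linear SVM (used here as an open-source stand-in for SVMlight). All data, feature values, and settings are illustrative assumptions, not the original study's setup.

```python
# Hypothetical sketch: combining hashtag and token unigram features with
# extra numeric scores (stand-ins for LIWC-style measurements) and
# classifying with a linear SVM.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

tweets = ["going to the #gym later", "love this #sunset so much",
          "match starts at 8 #football", "new nail polish #beauty"]
labels = np.array([0, 1, 0, 1])  # placeholder gender labels

unigrams = CountVectorizer(token_pattern=r"\S+").fit_transform(tweets)
hashtags = CountVectorizer(token_pattern=r"#\w+").fit_transform(tweets)

# Placeholder per-tweet numeric features; a real system would read these
# from LIWC (or similar) output rather than invent them here.
psychometric = csr_matrix(np.array([[0.1, 0.0],
                                    [0.4, 0.2],
                                    [0.0, 0.1],
                                    [0.3, 0.3]]))

X = hstack([unigrams, hashtags, psychometric])
clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```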

In this paper we restrict ourselves to gender recognition, and it is also this aspect we will discuss further in this section.

A group which is very active in studying gender recognition (among other traits) on the basis of text is that around Moshe Koppel. In (Koppel et al. 2002) they report gender recognition on formal written texts taken from the British National Corpus (and also give a good overview of previous work), reaching about 80% correct attributions using function words and parts of speech.
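As a rough illustration of the feature types mentioned here, the sketch below computes a profile of relative function-word frequencies for a text; part-of-speech tag frequencies from any tagger could be appended to the same vector in the same way. The word list and tokenization are illustrative assumptions, not the feature set actually used in the BNC experiments.

```python
# Hypothetical sketch: representing a text by the relative frequency of a
# small set of function words, one common style-feature type for gender
# recognition.
import re
from collections import Counter

# Illustrative subset; published studies use much larger function-word lists.
FUNCTION_WORDS = ["the", "a", "of", "and", "to", "in", "that", "it",
                  "i", "you", "she", "he", "we", "not", "with", "for"]

def function_word_profile(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    n = max(len(tokens), 1)
    # One relative-frequency value per function word, in a fixed order.
    return [counts[w] / n for w in FUNCTION_WORDS]

print(function_word_profile(
    "She said that it was a good idea, and I agreed with her."))
```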
