To build a database of lexical words from a corpus of texts divided into text types, and to rate each word according to the probability of its occurrence within each text type.
This involves being able to store text files, count each lexical word within the overall corpus and within each text type (novel, newspaper article, etc.), and then state which text type the word is most likely to occur in or, alternatively, give the probabilities of its occurrence in each text type.
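The counting and probability step described above could be sketched as follows. This is only an illustrative sketch, not the required implementation: it assumes the corpus arrives as (text type, token list) pairs, and the function and variable names are hypothetical.

```python
from collections import Counter, defaultdict

def build_counts(corpus):
    """Count each word per text type and over the whole corpus.

    `corpus` is assumed to be an iterable of (text_type, tokens) pairs,
    where `tokens` is a list of lexical words from one text.
    """
    by_type = defaultdict(Counter)   # text type -> word counts
    overall = Counter()              # word counts over the whole corpus
    for text_type, tokens in corpus:
        by_type[text_type].update(tokens)
        overall.update(tokens)
    return by_type, overall

def type_probabilities(word, by_type):
    """For each text type, the share of the word's occurrences found there."""
    counts = {t: c[word] for t, c in by_type.items()}
    total = sum(counts.values())
    if total == 0:
        return {t: 0.0 for t in by_type}
    return {t: n / total for t, n in counts.items()}
```

For example, a word that appears only in novels would receive probability 1.0 for "novel" and 0.0 elsewhere. At the 100-million-word scale the brief mentions, the same counts would more plausibly live in a database table than in memory, but the arithmetic is the same.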
The text corpus will initially be about 10 million words but will grow to about 100 million, so only very fast programs will be useful.
Results need to be represented numerically and, if possible, graphically. Programmers need to be highly numerate, preferably with at least an intermediate-level knowledge of statistics.
Phase 2: the user inputs a text and the text is 'typed' according to the probability of each word in the text occurring in a particular text type.
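One plausible reading of the Phase 2 "typing" step is a naive-Bayes-style score: sum, per text type, the log probability of each word in the user's text, with smoothing for unseen words. The sketch below is an assumption about the intended method, not a specification; the names and the add-alpha smoothing are illustrative choices.

```python
import math
from collections import Counter

def type_text(tokens, by_type, alpha=1.0):
    """Score each text type for an input token list.

    `by_type` maps text type -> Counter of word counts from the corpus
    (an assumed data layout). Add-alpha smoothing keeps words absent
    from a text type from zeroing out that type's score entirely.
    Returns the best-scoring type and the full score table.
    """
    scores = {}
    for text_type, counts in by_type.items():
        total = sum(counts.values())
        vocab = len(counts)
        score = 0.0
        for word in tokens:
            # Counter returns 0 for unseen words, so smoothing applies cleanly.
            score += math.log((counts[word] + alpha) / (total + alpha * vocab))
        scores[text_type] = score
    best = max(scores, key=scores.get)
    return best, scores
```

The log-probability totals could also be renormalised and reported per type, matching the brief's alternative of giving the probabilities of occurrence in each text type rather than a single verdict.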
Phase 3: other classifications will eventually be important, e.g. the age, gender, and social background of a text's author. We will eventually need to profile each corpus author and thereby estimate a profile for the author of the user's text on the basis of the corpus.
It would be ideal if the program could run both on the web AND on a user's PC.
You will need an FTP address to which I can upload the corpus of texts.