Investigating the impact of preprocessing on document embedding: an empirical comparison
Abstract
Digital representation of text documents is a crucial task in machine learning and natural language processing (NLP). It aims to transform unstructured text documents into mathematically computable objects. In recent years, several methods have been proposed and implemented to encode text documents into fixed-length feature vectors. This operation, known as document embedding, has become an active and open area of research. Paragraph Vector (Doc2vec) is one of the most widely used document embedding methods and has earned a solid reputation for the quality of its results. To overcome its limitations, Doc2vec was extended with the Document Vector through Corruption (Doc2vecC) technique. To gain deeper insight into these two methods, this work presents a study of the impact of morphosyntactic text preprocessing on both document embedding methods. We conducted this analysis by applying the most widely used text preprocessing techniques, such as cleaning, stemming and lemmatisation, as well as their different combinations. The experimental analysis on the Microsoft Research Paraphrase dataset (MSRP) reveals that these preprocessing techniques improve classifier accuracy, and that stemming outperforms the other techniques.
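To make the preprocessing steps named above concrete, the following is a minimal sketch of a cleaning-plus-stemming pipeline. It is an illustration only, not the authors' implementation: the `clean`, `stem` and `preprocess` names are hypothetical, and the toy suffix-stripping rule stands in for a real stemmer such as Porter's.

```python
import re

def clean(text):
    # Cleaning step: lowercase, strip punctuation and digits,
    # and collapse repeated whitespace.
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def stem(token):
    # Toy suffix-stripping stemmer for illustration; a real pipeline
    # would use e.g. the Porter stemmer or a lemmatiser instead.
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(doc):
    # Full pipeline: clean, tokenise on whitespace, then stem each token.
    return [stem(t) for t in clean(doc).split()]

print(preprocess("The Cats were chasing mice!"))
# → ['the', 'cat', 'were', 'chas', 'mice']
```

The resulting token lists would then be fed to the document embedding model (Doc2vec or Doc2vecC) in place of the raw text, which is how a preprocessing technique can change the learned vectors and, downstream, the classifier accuracy.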