
Learning structural representations of text documents in large document collections

Chapter


Abstract


  • The main aim of this chapter is to study the effects of structural representations of text documents when a connectionist approach is applied to modelling the domain. While text documents are often processed as unstructured data, we show in this chapter that the performance and problem-solving capability of machine learning methods can be enhanced through the use of suitable structural representations of text documents. We also show that extracting structure from text documents does not require knowledge of the underlying semantic relationships among the words used in the text. This chapter describes an extension of the bag-of-words approach: by incorporating the "relatedness" of word tokens as they are used in the context of a document, we obtain a structural representation of text documents that is richer in information than the bag-of-words approach alone. An application to very large datasets, for a classification problem and a regression problem, shows that our approach scales very well. The classification problem is tackled by the latest in a series of techniques that apply the idea of the self-organizing map to graph domains. We show that, with the incorporation of relatedness information as expressed by the Concept Link Graph, the resulting clusters are tighter than those obtained using a self-organizing map on a bag-of-words representation alone. The regression problem is to rank a text corpus. Here the idea is to include content information in the ranking of documents and to compare the resulting rankings with those obtained using PageRank. In this case the results are inconclusive, possibly due to the truncation of the Concept Link Graph representations. It is conjectured that the ranking of documents would be sped up if we included the Concept Link Graph representations of all documents together with their hyperlink structure.
The methods described in this chapter are capable of solving real-world data mining problems.
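The abstract's core idea, linking word tokens by how they co-occur in a document and then ranking nodes over the resulting graph, can be illustrated with a minimal sketch. This is not the chapter's actual Concept Link Graph construction (which is not specified here); it assumes a simple sliding-window co-occurrence definition of "relatedness" and applies a basic weighted PageRank-style power iteration over the resulting undirected graph.

```python
from collections import defaultdict

def cooccurrence_graph(tokens, window=3):
    """Build an undirected word-relatedness graph: two distinct tokens get a
    count-weighted edge whenever they occur within `window` positions of each
    other in the document (a stand-in for the chapter's relatedness notion)."""
    edges = defaultdict(int)
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            a, b = sorted((w, tokens[j]))
            if a != b:
                edges[(a, b)] += 1
    return dict(edges)

def pagerank(edges, d=0.85, iters=50):
    """Weighted PageRank via power iteration on the undirected graph:
    each node distributes its rank across incident edges in proportion
    to the edge weights."""
    nodes = sorted({n for e in edges for n in e})
    # total incident edge weight per node (used to normalize outflow)
    out = {n: sum(w for (a, b), w in edges.items() if n in (a, b))
           for n in nodes}
    r = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nr = {n: (1 - d) / len(nodes) for n in nodes}
        for (a, b), w in edges.items():
            nr[b] += d * r[a] * w / out[a]
            nr[a] += d * r[b] * w / out[b]
        r = nr
    return r

doc = ("graphs represent documents better than bags of words "
       "represent documents").split()
g = cooccurrence_graph(doc)
ranks = pagerank(g)
```

In this toy setting the rank vector stays a probability distribution (it sums to one), and frequently co-occurring tokens accumulate higher scores; the chapter's approach differs in that it feeds such graph representations into a self-organizing map rather than ranking words directly.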

Publication Date


  • 2013

Citation


  • Tsoi, A., Hagenbuchner, M., Kc, M. & Zhang, S. (2013). Learning structural representations of text documents in large document collections. In M. Bianchini, M. Maggini & L. Jain (Eds.), Handbook on Neural Information Processing (pp. 471-503). Berlin, Germany: Springer.

International Standard Book Number (ISBN-13)


  • 9783642366567

Scopus EID


  • 2-s2.0-84885572796

RO Metadata URL


  • http://ro.uow.edu.au/eispapers/2073

Book Title


  • Handbook on Neural Information Processing

Start Page


  • 471

End Page


  • 503

Place Of Publication


  • Berlin, Germany
