Dependency grammar feature based noun phrase extraction for text summarization.

 
International Journal of Computer Trends and Technology (IJCTT)          
 
© Sep to Oct Issue 2011 by IJCTT Journal
Volume-1 Issue-2
Year of Publication: 2011
Authors: Mrs. Dipti Sakhare, Dr. Rajkumar

MLA

Mrs. Dipti Sakhare, Dr. Rajkumar. "Dependency grammar feature based noun phrase extraction for text summarization." International Journal of Computer Trends and Technology (IJCTT), V2(2):309-313, Sep to Oct Issue 2011. ISSN 2231-2803. www.ijcttjournal.org. Published by Seventh Sense Research Group.

Abstract: Nowadays, a great amount of information is available due to the development of Internet technologies. Every time someone searches for something on the Internet, the response is huge, containing far more information than a person can read completely. Hence, one needs a means of producing summaries of this information. Summarization is an interesting and useful task that supports many other tasks, such as information extraction, and it takes advantage of techniques developed for other Natural Language Processing tasks. In automatic text summarization, the standard N-gram model is most often used to build the language model, but N-gram models are unable to learn the grammatical relations within sentences. We therefore propose to use dependency grammar based noun phrase retrieval as part of text preprocessing. This can help capture grammatical rules and may thereby be useful for extracting fundamental semantic units from natural language text.
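To make the proposed preprocessing step concrete, the following is a minimal sketch (not the authors' implementation) of extracting noun phrases from a dependency parse. It assumes the spaCy library and its small English pipeline as stand-ins for the dependency parser described in the paper; the function name and sample sentence are illustrative only.

    # Minimal sketch: noun phrase extraction from a dependency parse,
    # assuming spaCy as the parser (an assumption, not the paper's tool).
    import spacy

    # Small English pipeline that includes a dependency parser.
    nlp = spacy.load("en_core_web_sm")

    def extract_noun_phrases(text):
        """Return the noun phrases found via the dependency parse of `text`."""
        doc = nlp(text)
        # spaCy's noun_chunks are derived from dependency relations
        # (a nominal head together with its left-side modifiers).
        return [chunk.text for chunk in doc.noun_chunks]

    if __name__ == "__main__":
        sample = ("Automatic text summarization condenses a large document "
                  "into a short summary that preserves its main ideas.")
        print(extract_noun_phrases(sample))

In a summarization pipeline, the extracted phrases could serve as candidate semantic units that are scored and selected for the summary, in place of raw N-gram counts.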

References

[1] Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network. In Proceedings of HLT-NAACL 2003, pages 252-259.
[2] Philip Brooks, “SCP: Simple Chunk Parser”, Artificial Intelligence Center, University of Georgia, Athens, Georgia, USA, May 2003, www.ai.uga.edu/mc/ProNTo/Brooks.pdf.
[3] J. Bellegarda, “Statistical language model adaptation: review and perspectives,” Speech Communication, vol. 42, no. 1, pp. 93–108, 2004.
[4] R. Rosenfeld, “A whole sentence maximum entropy language model,” in Proceedings of the IEEE Workshop on Speech Recognition and Understanding, 1997.
[5] R. Rosenfeld, S. Chen, and X. Zhu, “Whole-sentence exponential language models: A vehicle for linguistic-statistical integration,” Computers, Speech and Language, vol. 15, pp. 55–73, 2001.
[6] P. Tapanainen and T. Järvinen, “A non-projective dependency parser,” in Proceedings of the fifth conference on Applied natural language processing, pp. 64–71, Association for Computational Linguistics, 1997.
[7] J. Nivre, “An efficient algorithm for projective dependency parsing,” in Proceedings of the 8th International Workshop on Parsing Technologies (IWPT), pp. 149–160, Citeseer, 2003.
[8] F. Amaya and J. Benedí, “Improvement of a Whole Sentence Maximum Entropy Language Model Using Grammatical Features,” in Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, p. 17, Association for Computational Linguistics, 2001.
[9] Giuseppe Attardi, Massimiliano Ciaramita: Tree Revision Learning for Dependency Parsing. Proceedings of HLT-NAACL 2007, Rochester, 2007, pp. 388-395.
[10] Aoife Cahill, Michael Burke, Ruth O'Donovan, Stefan Riezler, Josef van Genabith and Andy Way (2008) Wide-Coverage Deep Statistical Parsing using Automatic Dependency Structure Annotation, Computational Linguistics, Vol. 34, No. 1, pages 81-124. September 2008.
[11] Katrin Fundel, Robert Küffner, Ralf Zimmer. RelEx - Relation extraction using dependency parse trees. Bioinformatics, vol 23, no. 3, pp. 365-371, 2007.
[12] Marie-Catherine de Marneffe and Christopher D. Manning, “Stanford typed dependencies manual.”

Keywords: Text Summarization, Natural Language Processing, language model, dependency grammar.