A Novel Approach to Incorporating LLMs in Mid-size Organizations for Customer Insight Generation Using Tree of Thoughts Methodology

© 2024 by IJCTT Journal
Volume-72 Issue-10
Year of Publication : 2024
Authors : Apurva Srivastava, Aditya Patil, Alokita Garg, Amruta Hebli
DOI : 10.14445/22312803/IJCTT-V72I10P120
How to Cite?
Apurva Srivastava, Aditya Patil, Alokita Garg, Amruta Hebli, "A Novel Approach to Incorporating LLMs in Mid-size Organizations for Customer Insight Generation Using Tree of Thoughts Methodology," International Journal of Computer Trends and Technology, vol. 72, no. 10, pp. 127-140, 2024. Crossref, https://doi.org/10.14445/22312803/IJCTT-V72I10P120
Abstract
This paper presents a novel approach for mid-size organizations to leverage Large Language Models (LLMs) [2] to generate actionable insights from customer reviews and comments using the Tree of Thoughts (ToT) methodology [3]. As natural language processing evolves, LLMs have emerged as powerful tools for a wide range of text analytics tasks [1, 2]. However, their adoption in mid-size organizations has been limited by resource constraints and technical complexity [14, 15]. The proposed cost-effective and efficient method leverages the ToT approach to optimize LLM usage for customer feedback analysis in resource-constrained environments. Compared to traditional approaches, our method significantly improves insight generation and computational efficiency while requiring minimal LLM expertise [20]. Through a case study, this paper illustrates the approach's practical applications and benefits, providing a roadmap for mid-size organizations to harness the power of LLMs in their customer feedback analysis workflows [21].
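To make the abstract's workflow concrete, the sketch below shows one way a Tree-of-Thoughts loop could be wired around an existing LLM endpoint for review analysis: propose candidate insights, have the model score them, keep the best, and refine. This is an illustrative Python sketch under stated assumptions, not the paper's implementation; the `llm` callable, the prompts, and the `breadth`/`depth` settings are placeholders a reader would replace with their own model access and tuning.

```python
# Illustrative Tree-of-Thoughts-style sketch for turning customer reviews into
# candidate insights. The `llm` callable is a stand-in for whatever hosted or
# local model an organization already uses; prompts and search settings are
# assumptions, not the paper's exact configuration.
from typing import Callable, List, Tuple


def generate_insights(
    reviews: List[str],
    llm: Callable[[str], str],  # wrapper around an LLM endpoint: prompt -> completion
    breadth: int = 3,           # candidate "thoughts" kept at each level
    depth: int = 2,             # number of refinement rounds
) -> List[str]:
    context = "\n".join(reviews)

    # Level 0: propose several distinct candidate insights (the root's children).
    frontier = [
        llm(f"Reviews:\n{context}\n\nPropose one distinct, actionable customer insight (candidate {i + 1}).")
        for i in range(breadth)
    ]

    for _ in range(depth):
        # Self-evaluation step: ask the model to rate each candidate.
        scored: List[Tuple[float, str]] = []
        for thought in frontier:
            rating = llm(
                "Rate from 1 to 10 how specific and actionable this insight is. "
                f"Answer with a number only.\n\nInsight: {thought}"
            )
            try:
                score = float(rating.strip().split()[0])
            except (ValueError, IndexError):
                score = 0.0  # unparseable ratings fall to the bottom
            scored.append((score, thought))

        # Keep the best candidates and expand each into a refined insight.
        best = [t for _, t in sorted(scored, key=lambda pair: pair[0], reverse=True)[:breadth]]
        frontier = [
            llm(f"Reviews:\n{context}\n\nRefine this insight, citing supporting evidence from the reviews:\n{t}")
            for t in best
        ]

    return frontier
```

In this sketch the same model both proposes and scores candidates, mirroring ToT's propose-then-evaluate structure; routing the scoring prompt to a cheaper model is one plausible way a resource-constrained team could keep costs down.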
Keywords
Customer insights, Large language models, Mid-size organizations, Natural language processing, Tree of thoughts
Reference
[1] Jacob Devlin et al., “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding,” arXiv, pp. 1-16, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[2] Tom B. Brown et al., “Language Models are Few-Shot Learners,” arXiv, pp. 1-75, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[3] Shunyu Yao et al., “Tree of Thoughts: Deliberate Problem Solving with Large Language Models,” Advances in Neural Information Processing Systems, pp. 1-14, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[4] Bing Liu, Sentiment Analysis: Mining Opinions, Sentiments, and Emotions, 2nd ed., Cambridge University Press, pp. 1-448, 2020.
[Google Scholar] [Publisher Link]
[5] Tomas Mikolov et al., “Distributed Representations of Words and Phrases and their Compositionality,” Advances in Neural Information Processing Systems, vol. 26, pp. 1-9, 2013.
[CrossRef] [Google Scholar] [Publisher Link]
[6] Ashish Vaswani et al., “Attention is All You Need,” Advances in Neural Information Processing Systems, vol. 30, pp. 1-11, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[7] Xiang Zhang, Junbo Zhao, and Yann LeCun, “Character-Level Convolutional Networks for Text Classification,” Advances in Neural Information Processing Systems, vol. 28, pp. 1-9, 2015.
[CrossRef] [Google Scholar] [Publisher Link]
[8] Kamran Kowsari et al., “Text Classification Algorithms: A Survey,” Information, vol. 10, no. 4, pp. 1-68, 2019.
[CrossRef] [Google Scholar] [Publisher Link]
[9] Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, pp. 1-482, 2008.
[Google Scholar] [Publisher Link]
[10] Daniel Jurafsky, and James H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 3rd ed., Stanford University, pp. 1-599, 2024.
[Google Scholar] [Publisher Link]
[11] Cameron B. Browne et al., “A Survey of Monte Carlo Tree Search Methods,” IEEE Transactions on Computational Intelligence and AI in Games, vol. 4, no. 1, pp. 1-43, 2012.
[CrossRef] [Google Scholar] [Publisher Link]
[12] Murray Campbell, A. Joseph Hoane Jr, and Feng-Hsiung Hsu, “Deep Blue,” Artificial Intelligence, vol. 134, no. 1-2, pp. 57-83, 2002.
[CrossRef] [Google Scholar] [Publisher Link]
[13] Xinyun Chen et al., “Teaching Large Language Models to Self-Debug,” arXiv, pp. 1-78, 2023.
[CrossRef] [Google Scholar] [Publisher Link]
[14] Aakanksha Chowdhery et al., “PaLM: Scaling Language Modeling with Pathways,” Journal of Machine Learning Research, vol. 24, no. 240, pp. 1-113, 2023.
[Google Scholar] [Publisher Link]
[15] Rishi Bommasani et al., “On the Opportunities and Risks of Foundation Models,” arXiv, pp. 1-214, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[16] Tomas Mikolov et al., “Efficient Estimation of Word Representations in Vector Space,” arXiv, pp. 1-12, 2013.
[CrossRef] [Google Scholar] [Publisher Link]
[17] Thorsten Joachims, “Text Categorization with Support Vector Machines: Learning with many Relevant Features,” European Conference on Machine Learning, Lecture Notes in Computer Science, vol. 1398, pp. 137-142, 1998.
[CrossRef] [Google Scholar] [Publisher Link]
[18] Leo Breiman, “Random Forests,” Machine Learning, vol. 45, pp. 5-32, 2001.
[CrossRef] [Google Scholar] [Publisher Link]
[19] Colin Raffel et al., “Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer,” Journal of Machine Learning Research, vol. 21, no. 140, pp. 1-67, 2020.
[Google Scholar] [Publisher Link]
[20] Thomas Wolf et al., “Transformers: State-of-the-Art Natural Language Processing,” Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Association for Computational Linguistics, pp. 38-45, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[21] Jared Kaplan et al., “Scaling Laws for Neural Language Models,” arXiv, pp. 1-30, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[22] Emily M. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, New York, United States, pp. 610-623, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[23] Frederick F. Reichheld, “The One Number You Need to Grow,” Harvard Business Review, pp. 1-12, 2003.
[Google Scholar] [Publisher Link]
[24] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, “Neural Machine Translation by Jointly Learning to Align and Translate,” arXiv, pp. 1-15, 2014.
[CrossRef] [Google Scholar] [Publisher Link]