International Journal of Computer Trends and Technology

Research Article | Open Access

Volume 72 | Issue 8 | Year 2024 | Article Id. IJCTT-V72I8P118 | DOI : https://doi.org/10.14445/22312803/IJCTT-V72I8P118

Large Language Models: Revolutionizing Pervasive Computing


Meenakshi Sundaram Ambasamudram Sailappan

Received: 21 Jun 2024 | Revised: 25 Jul 2024 | Accepted: 12 Aug 2024 | Published: 31 Aug 2024

Citation:

Meenakshi Sundaram Ambasamudram Sailappan, "Large Language Models: Revolutionizing Pervasive Computing," International Journal of Computer Trends and Technology (IJCTT), vol. 72, no. 8, pp. 125-129, 2024. Crossref, https://doi.org/10.14445/22312803/IJCTT-V72I8P118

Abstract

This paper explores the transformative role of Large Language Models (LLMs) in advancing pervasive computing. It examines how LLMs enhance natural language processing, context awareness, and multimodal integration, thereby enabling more intuitive human-computer interaction and intelligent environments. The paper also addresses the challenges and future prospects of integrating LLMs into pervasive computing systems, including detailed case studies that demonstrate practical applications.

Keywords

Pervasive computing, Artificial Intelligence, Internet of Things (IoT), Natural language processing, Large language models.
