Exploring the Challenges of Serverless Computing in Training Large Language Models

© 2024 by IJCTT Journal
Volume-72 Issue-4
Year of Publication : 2024
Authors : Kushal Walia
DOI : 10.14445/22312803/IJCTT-V72I4P109

How to Cite?

Kushal Walia, "Exploring the Challenges of Serverless Computing in Training Large Language Models," International Journal of Computer Trends and Technology, vol. 72, no. 4, pp. 71-76, 2024. Crossref, https://doi.org/10.14445/22312803/IJCTT-V72I4P109

Abstract
This paper explores the use of serverless computing frameworks for training Large Language Models (LLMs), a cornerstone of modern artificial intelligence and machine learning. While serverless computing offers significant benefits, including reduced infrastructure costs and enhanced scalability, applying it to LLM training introduces a distinct set of challenges and limitations. Through an in-depth analysis, this study identifies key obstacles, such as statelessness, execution time limits, cold start latency, resource constraints, data management complexity, dependency management, and poor cost predictability, that complicate the deployment of LLM training pipelines in a serverless environment. Despite these hurdles, serverless computing retains considerable potential to improve the scalability and cost-efficiency of LLM training. By presenting a balanced view of feasibility, challenges, and prospective solutions, this paper offers insight into the current state and future possibilities of serverless computing for large language model training, a step towards optimizing computational resources in the advancement of AI technologies.
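
As a purely illustrative sketch (not taken from the paper), the statelessness and execution time limits named above typically push serverless training code towards a checkpoint-and-resume pattern, in which every invocation restores model state from durable storage and persists it again before the platform's timeout. The Python sketch below assumes an AWS Lambda-style handler; the bucket name, object keys, and the train_steps() routine are hypothetical placeholders.

# Illustrative only: a checkpoint-and-resume pattern for a serverless
# training worker. The bucket, keys, and train_steps() are hypothetical
# placeholders, not artifacts from the paper.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "llm-checkpoints"           # hypothetical bucket
CKPT_KEY = "model/checkpoint.pt"     # hypothetical checkpoint object
LOCAL_CKPT = "/tmp/checkpoint.pt"    # /tmp is the writable path in most FaaS runtimes

def train_steps(ckpt_path, time_budget_ms):
    # Stand-in for the real training loop: resume from ckpt_path if present,
    # run optimizer steps until the time budget is spent, then write the
    # updated weights back to ckpt_path.
    with open(ckpt_path, "ab") as f:
        f.write(b"")  # placeholder for saving updated model weights

def handler(event, context):
    # Statelessness: every invocation starts empty, so the previous
    # checkpoint must first be restored from durable storage.
    try:
        s3.download_file(BUCKET, CKPT_KEY, LOCAL_CKPT)
    except ClientError:
        pass  # first invocation: no checkpoint exists yet

    # Execution time limit: train only within the remaining budget,
    # leaving a margin to persist the checkpoint before the timeout.
    budget_ms = context.get_remaining_time_in_millis() - 30_000
    train_steps(LOCAL_CKPT, time_budget_ms=budget_ms)

    # Persist state so the next (possibly cold-started) invocation resumes
    # where this one stopped.
    s3.upload_file(LOCAL_CKPT, BUCKET, CKPT_KEY)
    return {"status": "checkpoint saved", "key": CKPT_KEY}

Orchestrating repeated invocations until training converges, and sharding data across many such workers, are left out of this sketch; those are precisely where the data management, dependency management, and cost predictability concerns identified above arise.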

Keywords
Artificial Intelligence, Cloud Computing, Generative Pretrained Transformer, Large Language Models, Serverless Computing.

Reference

[1] Ioana Baldini et al., “Serverless Computing: Current Trends and Open Problems,” Research Advances in Cloud Computing, pp. 1-20, 2017.
[CrossRef] [Google Scholar] [Publisher Link]
[2] Tom B. Brown et al., “Language Models are Few-Shot Learners,” arXiv, 2020.
[CrossRef] [Google Scholar] [Publisher Link]
[3] Jacob Devlin et al., “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding,” arXiv, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[4] Michael Armbrust et al., “Above the Clouds: A Berkeley View of Cloud Computing,” Electrical Engineering and Computer Sciences, University of California at Berkeley, Technical Report, 2009.
[Google Scholar] [Publisher Link]
[5] Kim Hazelwood et al., “Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective,” 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), Vienna, Austria, pp. 620-629, 2018.
[CrossRef] [Google Scholar] [Publisher Link]
[6] Michael Roberts, and John Chapin, Designing a Serverless Web Application, AWS Whitepaper, 2017. [Online]. Available: https://aws.amazon.com/whitepapers/serverless-architectures-with-aws-lambda
[7] Liang Wang et al., “Peeking Behind the Curtains of Serverless Platforms,” Proceedings of the 2018 USENIX Annual Technical Conference (USENIX ATC 18), Boston, MA, USA, 2018.
[Google Scholar] [Publisher Link]
[8] Guillaume Lample, and Alexis Conneau, “Cross-lingual Language Model Pretraining,” arXiv, 2019.
[CrossRef] [Google Scholar] [Publisher Link]