Challenges, Solutions, and Best Practices in Post Deployment Monitoring of Machine Learning Models

© 2024 by IJCTT Journal
Volume-72 Issue-11
Year of Publication: 2024
Authors: Surabhi Bhargava, Shubham Singhal
DOI: 10.14445/22312803/IJCTT-V72I11P107

How to Cite?

Surabhi Bhargava, Shubham Singhal, "Challenges, Solutions, and Best Practices in Post Deployment Monitoring of Machine Learning Models," International Journal of Computer Trends and Technology, vol. 72, no. 11, pp. 63-71, 2024. Crossref, https://doi.org/10.14445/22312803/IJCTT-V72I11P107

Abstract
In production environments, machine learning models often encounter data and operational conditions that differ significantly from those seen during training. These differences give rise to challenges such as data drift, concept drift, harmful feedback loops, adversarial attacks, model failures, and biases that only emerge in real-world use. Model interpretability also becomes crucial in these settings, since understanding how a model reaches its decisions is necessary for debugging, building trust, and mitigating inadvertent biases that could lead to unfair outcomes. This paper examines these challenges in depth and presents effective strategies for addressing them. Drawing on industry practices and research insights, it outlines key solutions, including dynamic retraining, versioning, adversarial training, robust monitoring, and fairness-aware model evaluation, to sustain model performance and equity after deployment.
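
To make the monitoring theme above concrete, the following minimal sketch (illustrative only, not taken from the paper) shows one common way to flag data drift: comparing a recent production sample of a feature against its training-time distribution with a two-sample Kolmogorov-Smirnov test from SciPy. The feature values, sample sizes, and the 0.05 significance threshold are assumptions made for illustration; a real deployment would apply such checks per feature and feed the results into an alerting or retraining pipeline.

# Illustrative sketch: flag data drift for one numeric feature by comparing
# a production window against the training sample with a two-sample KS test.
# Sample sizes, the simulated shift, and the alpha threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(train_values: np.ndarray,
                         prod_values: np.ndarray,
                         alpha: float = 0.05) -> bool:
    """Return True if the production sample differs significantly
    from the training sample for a single numeric feature."""
    result = ks_2samp(train_values, prod_values)
    return result.pvalue < alpha

# Example: a recent production window whose mean has shifted relative to training.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time feature values
prod = rng.normal(loc=0.4, scale=1.0, size=1_000)    # shifted production values
print(detect_feature_drift(train, prod))  # True -> candidate for an alert or retraining
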

Keywords
MLOps, Data and Concept drift, Model integrity, Adversarial attacks, Feedback loop.
