Assessment of Technical Information Quality using Machine Learning

© 2023 by IJCTT Journal
Volume-71 Issue-9
Year of Publication : 2023
Authors : Arvind Kumar Bhardwaj, Sandeep Rangineni, Divya Marupaka
DOI : 10.14445/22312803/IJCTT-V71I9P105
How to Cite?
Arvind Kumar Bhardwaj, Sandeep Rangineni, Divya Marupaka, "Assessment of Technical Information Quality using Machine Learning," International Journal of Computer Trends and Technology, vol. 71, no. 9, pp. 33-40, 2023. Crossref, https://doi.org/10.14445/22312803/IJCTT-V71I9P105
Abstract
Even specialists often cannot follow the reasoning behind decisions made by the most advanced ML systems, which makes these systems opaque to end users in high-stakes domains such as medical diagnosis and financial decision-making. As a result, the problem of explaining ML has attracted growing attention, both in academia and in the fields where it is applied. From a survey of explanatory theories, we isolate a set of characteristics that explanations should possess, and the assessment metrics we consider are aimed at achieving these defined qualities of explainability; no single set of assessment measures can be applied uniformly across all available explanation approaches. At the same time, software has become both more prevalent in consumer goods and services and more complex, and as our reliance on software grows, so does the importance of monitoring and improving its quality. Software metrics provide a quantifiable means of monitoring and managing different aspects of software systems, and the problem of predicting software quality can be recast as a classification, or concept-learning, task within the framework of machine learning. In this study, we lay the groundwork for applying machine learning techniques to the evaluation and forecasting of product quality in large software companies, and we provide evidence that such techniques can be useful in this context. Likewise, some objective measures of image quality are difficult and time-consuming to compute because they rely on explicit modelling of the highly non-linear nature of human perception. Although ML-based techniques for visual quality evaluation have been shown to work in a number of studies, the general reliability of these paradigms remains unclear because of their susceptibility to overfitting. A thorough understanding of the strengths and limitations of learning machines is therefore necessary before using ML to model perceptual systems. Best practices are presented and illustrated in this work.
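As a minimal illustration of the classification framing described in the abstract, the sketch below trains a decision-tree classifier on static software metrics to predict whether a module is fault-prone, and uses cross-validation to keep an eye on the overfitting risk noted for ML-based quality models. The metric names, the synthetic data, and the use of scikit-learn are assumptions made for illustration only; they do not reproduce the study's actual setup.

```python
# Hypothetical sketch: software-quality prediction recast as classification.
# The feature names, synthetic data, and model choice are illustrative
# assumptions, not the study's actual experimental setup.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic "software metrics" for 200 modules:
# columns = [lines of code, cyclomatic complexity, coupling between objects]
X = rng.normal(loc=[300, 10, 5], scale=[120, 4, 2], size=(200, 3))

# Synthetic label: a module is "fault-prone" (1) when size and complexity
# are jointly high, plus some noise.
risk = 0.004 * X[:, 0] + 0.08 * X[:, 1] + 0.1 * X[:, 2]
y = (risk + rng.normal(scale=0.3, size=200) > 2.2).astype(int)

# Concept learning: fit a classifier that separates fault-prone modules
# from the rest, using the metric vectors as features.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)

# Cross-validation gives a less optimistic estimate of generalisation,
# which matters because small quality datasets are easy to overfit.
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```

The same pattern carries over to the visual-quality setting discussed in the abstract: replace the software metrics with image features, swap the classifier for a regressor that predicts a perceptual quality score, and keep the cross-validation step as the guard against overfitting.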
Keywords
Data analysis, Data preparation, Machine learning, Data collection, Visual quality, Software quality.