Research Article | Open Access
Volume 4 | Issue 4 | Year 2013 | Article Id. IJCTT-V4I4P105 | DOI : https://doi.org/10.14445/22312803/IJCTT-V4I4P105
Algorithms for Computer Aided Diagnosis – An Overview
Dr.A.Padmapriya, K.Silamboli Chella Maragatham
Citation :
Dr.A.Padmapriya, K.Silamboli Chella Maragatham, "Algorithms for Computer Aided Diagnosis – An Overview," International Journal of Computer Trends and Technology (IJCTT), vol. 4, no. 4, pp. 472-478, 2013. Crossref, https://doi.org/10.14445/22312803/IJCTT-V4I4P105
Abstract
In medicine, two types of resources are becoming widely used: Content-Based Image Retrieval (CBIR) and Computer-Aided Diagnosis (CAD) systems. The purpose of CAD is to increase the accuracy of diagnosis, as well as to improve the consistency of image interpretation, by using the computer's results as a second opinion. Like CAD systems, CBIR uses information extracted from images to represent them. However, the main purpose of a CBIR system is to retrieve "cases" or images similar to a given one. Analyzing past similar cases and their reports can improve the radiologist's confidence when elaborating a new image report, besides making the training and diagnosing processes faster. Moreover, CAD and CBIR systems are very useful in teaching medicine. Currently, image mining has attracted many researchers in the data mining and information retrieval fields and has achieved prominent results. A major challenge in the image mining field is to effectively relate low-level features (automatically extracted from image pixels) to high-level semantics based on human perception. Association rules have been successfully applied in other research areas, e.g. business, and can reveal interesting patterns relating low-level and high-level image data as well. In this work, association rules are employed to support both CAD and CBIR systems.
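As context for the approach outlined above, the sketch below shows how association rules can relate low-level image features to high-level diagnostic labels. It is a minimal illustration, not the authors' method: the transactions, feature names, and thresholds are hypothetical, and the naive enumeration stands in for the efficient Apriori-style algorithms cited in the references.

```python
from itertools import combinations

# Toy "cases": each mixes hypothetical low-level image features
# (e.g. "spiculated_margin") with a high-level diagnosis label.
transactions = [
    {"high_contrast", "spiculated_margin", "malignant"},
    {"high_contrast", "smooth_margin", "benign"},
    {"high_contrast", "spiculated_margin", "malignant"},
    {"low_contrast", "smooth_margin", "benign"},
    {"high_contrast", "spiculated_margin", "malignant"},
]

def frequent_itemsets(transactions, min_support):
    """Naive frequent-itemset search (Apriori-style, for illustration)."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    frequent = {}
    for size in range(1, len(items) + 1):
        found = False
        for combo in combinations(items, size):
            support = sum(set(combo) <= t for t in transactions) / n
            if support >= min_support:
                frequent[combo] = support
                found = True
        if not found:  # anti-monotonicity: no larger frequent sets exist
            break
    return frequent

def rules(frequent, min_confidence):
    """Derive rules A -> B with confidence = support(A ∪ B) / support(A)."""
    out = []
    for itemset, supp in frequent.items():
        if len(itemset) < 2:
            continue
        for k in range(1, len(itemset)):
            for antecedent in combinations(itemset, k):
                conf = supp / frequent[antecedent]
                if conf >= min_confidence:
                    consequent = tuple(i for i in itemset if i not in antecedent)
                    out.append((antecedent, consequent, conf))
    return out

freq = frequent_itemsets(transactions, min_support=0.4)
for a, c, conf in rules(freq, min_confidence=0.9):
    print(f"{a} -> {c} (confidence {conf:.2f})")
```

On this toy data the mining surfaces rules such as "spiculated_margin implies malignant" with full confidence, which is exactly the kind of low-level-to-high-level pattern a CAD system can present as a second opinion.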
Keywords
Image mining, Association rules, Classification, Prediction.
References
[1] R. Agrawal, T. Imielinski, and A. N. Swami, "Mining association rules between sets of items in large databases," in Proc. 1993 ACM SIGMOD Int. Conf. Manage. Data (SIGMOD '93), Washington, DC, 1993, pp. 207–216.
[2] M. Dash and H. Liu. Feature selection for classification. Intelligent Data Analysis — An International Journal, 1(3), 1997. http://www.public.asu.edu/_huanliu/papers/ida97.ps
[3] Juzhen Z. Dong, Ning Zhong, and Setsuo Ohsuga. Using rough sets with heuristics for feature selection. In Ning Zhong, Andrzej Skowron, and Setsuo Ohsuga, editors, Proceedings of the 7th International Workshop on New Directions in Rough Sets, Data Mining, and Granular-Soft Computing (RSFDGrC-99), volume 1711 of Lecture Notes in Artificial Intelligence, pages 178–187, Berlin, November 9–11, 1999. Springer.
[4] Rich Caruana and Dayne Freitag. Greedy attribute selection. In Proceedings of the 11th International Conference on Machine Learning, pages 28–36. Morgan Kaufmann, 1994.
[5] K. Kira and L. A. Rendell. The feature selection problem: Traditional methods and a new algorithm. In Proceedings of the Ninth National Conference on Artificial Intelligence, pages 129–134. AAAI Press, 1992.
[6] George H. John, Ron Kohavi, and Karl Pfleger. Irrelevant features and the subset selection problem. In Proceedings of ICML-94, the Eleventh International Conference on Machine Learning, pages 121–129, New Brunswick, USA, 1994.
[7] Daphne Koller and Mehran Sahami. Toward optimal feature selection. In Proceedings of ICML-96, the Thirteenth International Conference on Machine Learning, pages 284–292, Bari, Italy, 1996.
[8] P. M. Narendra and K. Fukunaga. A branch and bound algorithm for feature subset selection. IEEE Transactions on Computers, 26:917–922, 1977.
Z. Pawlak. Rough sets. International Journal of Computer and Information Sciences, 11:341–356, 1982.
[9] Wang, C., Tjortjis, C., PRICES: An Efficient Algorithm for Mining Association Rules.
[10] Agarwal, R., Aggarwal, C., and Prasad, V., A tree projection algorithm for generation of frequent itemsets. In J. Parallel and Distributed Computing, 2000.
[11] K. Kira and L. A. Rendell. A practical approach to feature selection. In Machine Learning: Proceedings of the Ninth International Conference, 1992.
[12] M.X. Ribeiro, A.J.M. Traina, C. Traina Jr., N.A. Rosa, P.M.A. Marques, How to improve medical image diagnosis through association rules: The IDEA method, in: The 21st IEEE International Symposium on Computer-Based Medical Systems, Jyvaskyla, Finland, 2008, pp. 266–271.
[13] R. Agrawal, R. Srikant, Fast algorithms for mining association rules, in: Intl. Conf. on VLDB, Santiago de Chile, Chile, 1994, pp. 487–499.
[14] N. Cristianini and J. Shawe-Taylor. Support Vector Machines. Cambridge University Press, 2000.
[15] J. Cervantes, Xiaoou Li, and Wen Yu, "SVM Classification for Large Data Sets by Considering Models of Classes Distribution," Sixth Mexican International Conference on Artificial Intelligence - Special Session (MICAI 2007), pp. 51–60, 2007.
[16] Aha, D. (1992). Tolerating noisy, irrelevant, and novel attributes in instance-based learning algorithms. International Journal of Man-Machine Studies, 36(2), 267–287.
[17] Bay, S. D. (1999). Nearest neighbor classification from multiple feature subsets. Intelligent Data Analysis, 3(3), 191–209.
[18] Han, J. and Pei, J. 2000. Mining frequent patterns by pattern growth: methodology and implications. ACM SIGKDD Explorations Newsletter 2, 2, 14–20.
[19] Tien Dung Do, Siu Cheung Hui, Alvis Fong, Mining Frequent Itemsets with Category-Based Constraints, Lecture Notes in Computer Science, Volume 2843, 2003, pp. 76–86.
[20] Content-based analysis (Hayes, 1990), Association Analysis, Categorization and Prediction (Han, 2001), Outlier Analysis, Evolution Analysis (Lewis, 1990).
[21] Austin, M. P. 2002. Spatial prediction of species distribution: an interface between ecological theory and statistical modelling. Ecol. Modell. 157: 101–118. Austin, M. P. and Cunningham, R. B. 1981. Obse.
[22] Rich Caruana and Dayne Freitag. Greedy attribute selection. In Proceedings of the 11th International Conference on Machine Learning, pages 28–36. Morgan Kaufmann, 1994.
[23] Hussein Almuallim and Thomas G. Dietterich. Learning Boolean concepts in the presence of many irrelevant features. Artificial Intelligence, 69(1–2):279–305, 1994.