Overcoming Challenges in Applying AI Guidance to Complex and Legacy Codebases

Authors

  • Mikita Piastou, Full-Stack Developer, Emplifi, Calgary, AB, Canada

Keywords:

AI guidance, legacy codebases, code complexity analysis, software maintenance, code metrics, AI integration

Abstract

This paper investigates the challenges of applying AI guidance to complex and legacy codebases. Various AI models were assessed and tuned to improve their effectiveness in analyzing and guiding legacy code. Our approach was to analyze five diverse codebases in depth for code complexity, capturing metrics including functions, classes, and method calls. Python was used for simulation, and a pre-trained AI model was fine-tuned via TensorFlow/Keras so that it better matched the characteristics of legacy code. The resulting fine-tuned model was then tested, achieving 84% accuracy with a 45% performance overhead. Our results show the effect of AI tools on performance and contrast scenarios with and without AI guidance. Visualizations of performance overhead and accuracy metrics illustrate these trade-offs and can help stakeholders weigh the value created against the cost incurred by AI. The study highlights several lessons for optimizing AI tools to work with complex codebases and provides guiding principles for the effective application of AI in software maintenance and improvement.
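The kind of structural complexity analysis the abstract describes (counting functions, classes, and method calls) can be sketched in Python using the standard-library `ast` module. This is an illustrative example only, not the paper's actual analysis pipeline; the function name `complexity_metrics` and the sample source are hypothetical.

```python
# Illustrative sketch (assumption: not the paper's actual tooling) of
# extracting the structural metrics the abstract mentions -- function,
# class, and call counts -- from Python source via the `ast` module.
import ast
from collections import Counter


def complexity_metrics(source: str) -> Counter:
    """Count functions, classes, and call sites in Python source."""
    tree = ast.parse(source)
    counts = Counter()
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            counts["functions"] += 1
        elif isinstance(node, ast.ClassDef):
            counts["classes"] += 1
        elif isinstance(node, ast.Call):
            counts["calls"] += 1
    return counts


sample = """
class Greeter:
    def greet(self, name):
        return "Hello, " + name

def main():
    print(Greeter().greet("world"))
"""
print(complexity_metrics(sample))
```

Metrics like these, gathered per codebase, are the sort of features one could feed to a model when tuning it toward the characteristics of legacy code.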

Published

17-04-2024

How to Cite

[1] “Overcoming Challenges in Applying AI Guidance to Complex and Legacy Codebases”, J. of Art. Int. Research, vol. 4, no. 1, pp. 312–331, Apr. 2024, Accessed: Mar. 07, 2026. [Online]. Available: https://www.thesciencebrigade.org/JAIR/article/view/325