Evaluating the Myths and Risks of AI in Software Development

Feb 14, 2025

In software development, as in so many professional disciplines, there’s simply no substitute for experience. As AI continues to be a hot topic of conversation in our industry, we’ve found that experience is just as crucial for separating the technology’s outsized promise from its current reality.

That said, for all the discussion about AI, the clearest truth is that it’s a moving target. The technology is evolving quickly, with new platforms and tools entering the market each day, each promising to revolutionize the industry. As always, the reality is far more nuanced.

However, by striking the right balance between adoption and oversight, your business can use AI to streamline development. Here, we break down a few of the biggest myths surrounding AI in software development, along with some of the biggest risks you may not have expected.


The Two Biggest Myths in AI Software Development

The rapid evolution of AI over the past year alone has shown how difficult it is to speak about the technology with 100% certainty. Improvements to large language models seem to launch daily.

That said, Kitestring has found the following myths continue to have staying power.

AI Will Replace Software Development Teams

The idea of AI replacing skilled workers has been one of the most persistent fears about the technology since it arrived. But it simply isn’t true.

AI can write lines of code, but software development is so much more than coding. A skilled software engineer has to ask the right questions and understand what your stakeholders need. Whatever feature or tool a developer is coding, there’s a business context to its application and usability that AI simply can’t be expected to know.

To create useful software, you have to conduct an effective investigation into the business need you’re trying to address. For Kitestring, that effort often begins with working to thoroughly understand your specific customer. AI can assist with that type of research, but it’s likely to go off on tangents that only an experienced developer will know to ignore.

AI is most useful as an assistant that broadens your thinking and drives further research. But when it comes to applying those insights, you need an experienced software developer to ensure all your code functions as it should.

AI Will Give Developers Skills They Don’t Already Have

One of the most common use cases for AI is acting as an assistant that helps your team reach new heights. However, if your software developers don’t already have a high level of facility with a given programming language or software platform, AI is not going to level up their skills.

AI should, at most, be treated as a junior-level developer tackling repetitive, rudimentary tasks, and even then it needs oversight. If a developer isn’t an expert in a specific discipline, they won’t be able to recognize when AI produces even an obvious mistake.
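As a hypothetical illustration (invented for this post, not drawn from any particular tool), here’s the kind of Python mistake an AI assistant can produce that looks fine to a non-expert but jumps out at an experienced reviewer:

```python
# AI-suggested helper: collect a user's tags into a list.
# The mutable default argument is a classic trap: Python creates
# the default list once, so it is shared across every call.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

print(add_tag("urgent"))   # ['urgent']
print(add_tag("billing"))  # ['urgent', 'billing'] -- state leaked between calls

# The version an experienced reviewer would insist on:
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```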

In our experience, AI has helped our seasoned software engineers parse specific sections of code. When effectively prompted, AI is useful for explaining what a piece of code means or why it was written a certain way. But again, AI is only being used to break down concepts for an engineer who is already very skilled. When it comes to transforming a developer into someone they’re not, AI isn’t yet that powerful.


Risks to Consider When Incorporating AI into Development 

To adopt AI successfully, you need a top-down strategy. Your teams need the right framework in place to use AI to accelerate the way they work, because adopting the technology carries real risk. Here are just a few risks worth considering as you move forward:

Poor Code Quality

Even if your organization is using AI to handle simple tasks, your engineers need to understand what good code looks like. AI’s results are uneven, and the code it generates needs to be checked at every step of the way. Otherwise, the software you’re developing simply won’t work as it should.
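One lightweight check is to require that any AI-generated function ship with human-written tests before it’s merged. A minimal sketch, using a hypothetical parse_price helper standing in for AI output:

```python
# Hypothetical AI-generated helper, followed by the human-written
# tests that gate it. Names are illustrative, not from a real codebase.
def parse_price(raw: str) -> float:
    """Convert a price string like '$1,234.56' to a float."""
    return float(raw.replace("$", "").replace(",", ""))

def test_parse_price_basic():
    assert parse_price("$1,234.56") == 1234.56

def test_parse_price_rejects_garbage():
    # The reviewer adds the edge case the AI never considered.
    try:
        parse_price("free!")
    except ValueError:
        pass  # expected: non-numeric input must fail loudly
    else:
        raise AssertionError("expected ValueError for non-numeric input")
```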

Difficulties in Reviewing AI-Generated Code

This risk is especially prominent in large organizations whose codebases mix different patterns and architectures. AI won’t be able to follow everything you’ve built so far, because it hasn’t been trained on the way you write code, and its output won’t meet your standards on its own.

Escalating Technical Debt

If you use AI to write a specific function in your code, it may give you an answer that is correct and works. However, the result may not be understandable to every engineer because of the diminished quality of the code.

If no one on your development team understands a section of code, what it’s doing, or how it behaves, they’re not going to be able to modify it going forward. You’ll lack visibility into part of your software’s functionality. That adds technical debt that will have to be paid down before it distorts how your code is written in the future.
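As a contrived illustration, the two functions below compute the same order total. The first is the kind of dense, “correct but opaque” code an AI assistant can hand back; the second is what a teammate can actually read and modify. (The 10-item/10% bulk discount rule is invented for the example.)

```python
# Dense one-liner an AI might produce: it works, but the intent is buried.
def total(o):
    return sum(i["p"] * i["q"] for i in o) * (0.9 if sum(i["q"] for i in o) >= 10 else 1.0)

# A maintainable version of the same logic.
BULK_DISCOUNT_THRESHOLD = 10  # items
BULK_DISCOUNT_RATE = 0.10     # 10% off large orders

def order_total(items):
    subtotal = sum(item["price"] * item["quantity"] for item in items)
    total_quantity = sum(item["quantity"] for item in items)
    if total_quantity >= BULK_DISCOUNT_THRESHOLD:
        subtotal *= 1 - BULK_DISCOUNT_RATE
    return subtotal
```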

Security Risks and Exposures

Fundamentally, AI is outside software that you’re introducing into your systems. You and your team may know how to use the technology, but you don’t know how it’s working under the hood. You can’t trust an AI agent to interact directly with your data without the proper guardrails in place.

AI is prone to inaccuracies, commonly called hallucinations, and you don’t want those hallucinations flowing into code that integrates with your systems. Without the right oversight, they could easily do a lot of damage.
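A minimal sketch of one such guardrail, assuming an agent that proposes actions by name (the interface and action names here are hypothetical):

```python
# A minimal guardrail sketch: every action an AI agent proposes is
# checked against an explicit allowlist before it touches real data.
ALLOWED_ACTIONS = {"read_report", "summarize_logs"}  # read-only by design

def execute_agent_action(action: str, payload: dict) -> str:
    if action not in ALLOWED_ACTIONS:
        # Refuse anything outside the allowlist, including hallucinated
        # or destructive operations.
        raise PermissionError(f"Agent action {action!r} is not permitted")
    # ... dispatch to the real read-only handler here ...
    return f"executed {action}"

# A hallucinated or malicious suggestion is stopped cold:
try:
    execute_agent_action("delete_customer_records", {"id": 42})
except PermissionError as err:
    print(err)  # Agent action 'delete_customer_records' is not permitted
```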

Possibility of Bias in Its Conclusions

AI models, and LLMs in particular, carry an inherent bias: they’re trained to please the user on the other side. If you ask one to look over your code, it might suggest various new directions that could be helpful.

But present the same code and say it was written by an AI program, and the model will be much stricter, offering far more recommendations for improvement. Knowing which prompts and queries draw out the strongest critique is crucial to getting the best results.
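For instance, the two framings below request a review of the same snippet. Given the bias described above, the second framing, which disclaims ownership, tends to draw the harsher and more useful critique. The prompt wording is illustrative only:

```python
# Two framings of the same review request. Adapt the prompt text
# to whichever model and tooling you use.
code_snippet = "def average(xs): return sum(xs) / len(xs)"

# Framing 1: the model knows it's your code, so it tends to be polite.
gentle_prompt = f"Please review my code:\n{code_snippet}"

# Framing 2: disclaiming ownership invites a blunter critique.
strict_prompt = (
    "The following code was written by an AI program. "
    "List every defect, edge case, and risky assumption you can find:\n"
    f"{code_snippet}"
)
```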

Ethical Considerations

AI is a tool that promises to function with “intelligence.” But everything the tool knows is based on someone else’s work and code. Ethical considerations about how each AI platform acquired its source material vary from platform to platform, and your organization needs a strategy for addressing those concerns.

Before you investigate which AI tool to use, you have to understand how much risk you’re comfortable absorbing. Does your business have an issue if snippets of proprietary code get out? How central is the code your developers are working on to your business?

There’s always a risk of exposing your data to breaches or leaks when you send it outside your organization. Not all AI tools are created equal when it comes to ethical standards, which could lead to your data being sold or used to inform code written outside your organization.

Possibility of Training Data Manipulation

If you’re using an AI model, you don’t have visibility into how it was trained or how it’s working under the hood. If you feed it some of your organization’s code, it may seem to be working as expected while behaving suspiciously in the background.

One of the biggest risks in AI is the lack of transparency around how models are trained and whether they store or share your data. You need to evaluate your tolerance for those risks and investigate each platform accordingly.


Whether you’re aiming to integrate AI-powered software into your current system or develop a custom solution, Kitestring can help demystify the technology. Our AI Strategy Consulting Service will help you prepare for an AI-driven future that’s right for your specific business needs.