Why Machine Learning Models Fail… and How to Succeed

This post originally appeared on the Atrium blog. Authored by SAKET CHATURVEDI.

According to Gartner, only 53% of machine learning models convert from prototype to production.

Put another way: 47% of machine learning projects fail to reach production. An organization that sets out to implement a machine learning solution therefore faces a very real chance that the implementation will fail.

Reasons for Failure

No Clearly Defined Problem
Machine learning is not magic. Everything is built around the problem to be solved. If there is no defined goal and no problem that needs solving, no amount of data analysis and modeling work can make the project a success.

Having a good understanding of the organization’s data maturity can help in identifying if the problem at hand actually needs a machine learning solution or can be solved by a simpler analytics solution.

Because machine learning is an iterative process, it is important to have a success metric that defines when to stop. If there are no well-defined success criteria, the project will keep going in loops.

Solution: Understand the Why

Understanding the why helps define the goal and creates a shared understanding between the business and the data scientist, so that both are on the same page and working towards a feasible solution.

There is a chance the problem can be solved by an analytics solution instead of a machine learning algorithm. Once the business problem is defined, the next step is to define the success criteria, so that the project stays focused and has a fixed target to achieve.

Data Quality
Machine learning models are only as good as their data. For example, if we train a model to identify oranges in images but the data we have consists of photos of apples, the model will never learn what an orange looks like, and it will not produce good results. Even when the problem statement is well defined, several factors can impact the model training process:

  • Availability of the data required for the problem
  • Data quality
  • Relevance to the problem
  • Data bias

Even when we get the most relevant data for the problem, the majority of time is still spent on data quality and cleansing work. The data required for training a machine learning model has steep quality requirements: passing low-quality data during training results in a bad model, which in turn produces inaccurate predictions. Moreover, even if the training pipeline is streamlined to improve the quality of the data, the model can still encounter bad data at prediction time because of the way the business or its data capture processes work. The data we use for training should be representative of the data the model will encounter when deployed in production.
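To make the data quality factors above concrete, here is a minimal sketch (plain Python; the record fields and thresholds are hypothetical examples, not from the post) of the kind of automated checks that can run before training:

```python
# Minimal data-quality audit sketch. The "price"/"category" fields and the
# valid price range are made-up illustrations of availability, quality, and
# relevance checks; a real project would tailor these to its own schema.

def audit(records, required_fields, valid_price_range):
    """Count missing values, exact duplicates, and out-of-range prices."""
    report = {"missing": 0, "duplicates": 0, "out_of_range": 0}
    seen = set()
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
        for field in required_fields:
            if rec.get(field) is None:
                report["missing"] += 1
        price = rec.get("price")
        if price is not None and not (valid_price_range[0] <= price <= valid_price_range[1]):
            report["out_of_range"] += 1
    return report

rows = [
    {"price": 10.0, "category": "fruit"},
    {"price": 10.0, "category": "fruit"},   # exact duplicate
    {"price": None, "category": "fruit"},   # missing value
    {"price": -5.0, "category": "fruit"},   # impossible price
]
print(audit(rows, ["price", "category"], (0.0, 100.0)))
```

Checks like these catch mechanical problems early; data bias and relevance to the problem still require human review alongside the automated audit.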

Solution: Educate and Communicate

It is important to incorporate data quality standards in the project and work with the customer to identify and eliminate the root cause of the bad data problem.

Model Generalization

Generalization becomes a problem when the training data does not represent the actual production data. If you train a model on a narrow dataset and it encounters a data point it has never seen before, it will fail. Two terms come up whenever we talk to a data scientist: overfitting and underfitting.

Overfitting is when your model fits the training data too well and fails to work on unseen data. One reason for this is using a method that is too complex for a relatively simple problem.

Underfitting is when your model is not able to identify any meaningful patterns in the data that you passed. One reason for this is using a method that is too simple for the problem at hand, or because you aren’t using the right data in your model.
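As an illustrative sketch (my own example, not from the post), fitting polynomials of different degrees to noisy samples of a sine curve shows both failure modes: a degree-1 model is too simple and underfits, while a degree-9 model on ten points memorizes the noise and overfits.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, 10)
x_test = np.linspace(0.05, 0.95, 10)  # unseen points from the same curve
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0.0, 0.2, 10)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

underfit = np.polyfit(x_train, y_train, 1)  # too simple: a straight line
balanced = np.polyfit(x_train, y_train, 3)  # enough flexibility for a sine
overfit = np.polyfit(x_train, y_train, 9)   # 10 coefficients, 10 points:
                                            # it also fits the noise

# The overfit model looks best on the training data...
assert mse(overfit, x_train, y_train) < mse(balanced, x_train, y_train)
# ...but the balanced model generalizes better than the underfit one.
assert mse(balanced, x_test, y_test) < mse(underfit, x_test, y_test)
```

The gap between training error and error on held-out data is exactly what the validation methods in the next section are designed to expose.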

Solution: Have a Good Model Training and Validation Process

One safeguard is to use validation methods during training. Various methods can be leveraged, such as resampling (cross-validation) or the train/validation/test split strategy, where the training and validation sets are used to fit and tune the model and the held-out test set is used only at the end of the training cycle.

The “Proof of Concept”

With a deliberately limited project, organizations try to validate whether the solution would work in a larger setting. The problem lies in identifying the risks we will encounter when implementing the project at that larger scale.

The training data we used might be good enough for the POC, but at larger scale, multiple data sources may need to be joined, aggregated, or transformed, and the data may turn out to be incomplete or unfit for the problem we wanted to solve. Data science projects take time and involve experimentation to identify the best method for the problem. Trying to replicate that in a compressed time frame while expecting high accuracy and deep insights increases the chance of failure, particularly if proper expectations are not set.

Solution: POCs are Not Bad, but the Execution and Expectations Are

Setting up realistic expectations with customers can go a long way. Before starting a POC, set the right expectations and evaluation criteria, as there is always room for improvement and the possibility of finding new insights when more data is incorporated.

Fitting a Model and Getting the Predictions

Failure comes when the intended use of the predictions is not clearly articulated and users don't know what to make of the output. It is important to deliver the results to end users in the right form and to educate them on what the predictions mean.

Solution: Begin with the End

To that end, engage the functional users for their point of view, agree on a change management strategy with the stakeholders, and create a plan for training end users on the predictions and what they mean. At the end of the day, this step is just as important as the data and the models.

Learn about Elevate and how we can help you succeed with your machine learning models.

As the market leader in intelligent solutions, we help organizations make smarter decisions and act on them. Learn more at atrium.ai