I have built a model that scores 99.9% accuracy! Great! Fantastic!
This is what a colleague of mine calls the "Now what?" effect. After training, testing, and optimizing a model repeatedly, we get this fantastic performance on the evaluation set. Now it is time to put the model to good use on real-life, possibly streaming, data. This phase is called model deployment.
Usually, a deployment-dedicated workflow reads the incoming new data, applies the previously trained, evaluated, and optimized model, and produces the expected response.
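The read-apply-respond pattern described above can be sketched in a few lines. This is a hedged toy example, not a real deployment stack: the ChurnModel class, the field names, and the score values are all illustrative stand-ins; a real workflow would load a genuinely trained model rather than a hard-coded rule.

```python
# A minimal sketch of a deployment workflow, assuming the trained model
# was persisted with Python's pickle. All names here are illustrative.
import pickle


class ChurnModel:
    """Stand-in for a previously trained, evaluated, and optimized model."""

    def predict_proba(self, record):
        # A real model would use learned parameters; this toy rule
        # simply flags short-tenure customers as higher churn risk.
        return 0.8 if record["tenure_months"] < 6 else 0.2


# Training phase: persist the optimized model once.
saved = pickle.dumps(ChurnModel())


# Deployment phase: read the incoming data, apply the stored model,
# and produce the expected response.
def score_record(model_bytes, record):
    model = pickle.loads(model_bytes)
    return model.predict_proba(record)


new_record = {"customer_id": 42, "tenure_months": 3}
print(score_record(saved, new_record))  # higher score = higher churn risk
```

The point of the separation is that the deployment step only needs the serialized model and the new record; it knows nothing about how the model was trained.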
An example: Churn Prediction
In churn prediction, we train a model to predict the probability that each customer will churn, based on their demographics, habits, loyalty, and overall history with the company and its products. We then have this great model sitting somewhere on our machines.
Now, a customer calls the call center for whatever reason. The agent pulls up the customer's data from the database, adds some information that the customer provides during the call, and presses a button to activate the deployment workflow.
The deployment workflow reads the customer's stored data, integrates it with the current call's data, interrogates the model, and produces the customer's likelihood to churn in the form of a score. Based on this score, the agent follows a different path in customer support.
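The call-center scenario above can be sketched as follows. Everything here is a hedged illustration: the in-memory CUSTOMER_DB, the churn_score rule, the field names, and the 0.7 threshold for routing the agent are all assumptions standing in for the real database, model, and business rules.

```python
# Illustrative sketch of the call-center scoring step; the database,
# the scoring rule, and the threshold are toy stand-ins.

CUSTOMER_DB = {  # stands in for the company database
    42: {"tenure_months": 3, "monthly_spend": 20.0},
}


def churn_score(record):
    """Toy stand-in for interrogating the trained churn model."""
    score = 0.5
    if record["tenure_months"] < 6:
        score += 0.3
    if record.get("complaint", False):
        score += 0.2
    return min(score, 1.0)


def handle_call(customer_id, call_data):
    # Pull the customer's stored data and merge in the current call's data.
    record = {**CUSTOMER_DB[customer_id], **call_data}
    score = churn_score(record)
    # Based on the score, the agent follows a different support path.
    path = "retention offer" if score >= 0.7 else "standard support"
    return score, path


print(handle_call(42, {"complaint": True}))
```

The merge of stored data with the current call's data is the key step: the model sees one complete record, assembled at scoring time.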
A best practice in the implementation of deployment workflows is the adoption of PMML models. In the deployment workflow, you then just need a PMML Predictor or a JPMML Classifier node to interrogate the model, whatever the model type. Indeed, a PMML interpreter node is capable of identifying the model type and calling the appropriate predictor.
If you use a PMML Predictor node (or a JPMML Classifier node) in the deployment workflow, you do not need to update the workflow when you change the model. If for the past month you have used a decision tree and you now want to switch to a neural network, a deployment workflow using a PMML interpreter node does not need any update.
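The idea behind such a model-agnostic node can be sketched in plain Python: inspect the PMML document, identify the model type, and dispatch to a matching predictor. This is a loose analogy, not how the actual PMML Predictor or JPMML Classifier nodes are implemented; the two PMML snippets are heavily simplified (a real PMML file also contains a Header, a DataDictionary, and the full model definition), and the predictor functions are placeholders.

```python
# Sketch of the dispatch idea behind a PMML interpreter node; the PMML
# snippets are simplified and the predictors are placeholders.
import xml.etree.ElementTree as ET

TREE_PMML = "<PMML version='4.4'><TreeModel functionName='classification'/></PMML>"
NN_PMML = "<PMML version='4.4'><NeuralNetwork functionName='classification'/></PMML>"

# One predictor per supported model type (placeholders here).
PREDICTORS = {
    "TreeModel": lambda record: "scored by decision tree",
    "NeuralNetwork": lambda record: "scored by neural network",
}


def interrogate(pmml_text, record):
    root = ET.fromstring(pmml_text)
    # In these simplified documents, the first child of <PMML>
    # names the model type; dispatch to the matching predictor.
    model_type = root[0].tag
    return PREDICTORS[model_type](record)


record = {"tenure_months": 3}
print(interrogate(TREE_PMML, record))
print(interrogate(NN_PMML, record))
```

Swapping the decision tree PMML for the neural network PMML changes the output, but the deployment code itself is untouched, which is exactly the benefit claimed above.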