Staging and Production

Staging and Production are Environments that share similar properties but serve different purposes. Staging is where you build the pipeline with the trained AI model for testing purposes. It can be used to try out the AI task with a small number of users, or with internal users, before moving it to Production. Staging has the same properties as Production and therefore lets you simulate your real production runs. Staging cannot be created before you have created a trained model in your Training Playground.

Production is where you build the pipeline with the trained AI model for solving your business task. A Production environment cannot be created before you create a model in your training environment, but it can exist without a Staging environment, which is optional.

Each run of Staging or Production generates predictions and additional analytics for your data. You can make predictions in real time, automate them with the cron scheduler, or obtain predictions on demand by manually running the Pipeline.

Deployment to Staging

There are multiple ways to deploy an AI model to Staging. You can deploy a model by choosing it from the list of trained models on the Experiment page, or from the results page of the corresponding training run. The first time you deploy to Staging, the whole Pipeline is copied, including the trained AI model and the data sources. You then have to reconfigure the data source blocks, and possibly the transformation blocks, to use data relevant for Staging. On subsequent deployments, you can simply switch the trained AI model itself from the experiment management page.

Deployment to Production

You can deploy your model to Production in the same way as you deploy it to Staging. However, we recommend deploying to Production from Staging rather than directly from the Training environment, so that you reuse the Pipeline, including the data source and transformation blocks, already validated in Staging. In that case there is no need to reconfigure it unless you want to change your data source. When you deploy from Staging to Production, you can explicitly specify whether to copy the whole Pipeline, Cron Schedule, and Report settings from Staging as well.

Deployment of an AI model from Staging to Production

For both Staging and Production, it is important that the structure of the data entering the Wand ML block remains compatible with the data used for training. In particular, all column names and data types, except for the column with the prediction key, must match the column names and types used for training. Any additional columns, including the one with the prediction key, are allowed in Staging and Production but are ignored by the AI model during prediction.
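As a rough illustration of this compatibility rule, the sketch below checks a staging DataFrame against a column/type mapping captured at training time. The names used here (training_schema, check_compatibility, customer_id) are hypothetical examples, not part of the Wand product or API.

import pandas as pd

# Hypothetical example schema captured at training time: column name -> dtype.
training_schema = {"age": "int64", "income": "float64", "segment": "object"}

def check_compatibility(df: pd.DataFrame, schema: dict, prediction_key: str = "customer_id") -> None:
    """Raise if a column used in training is missing or has a different dtype."""
    for column, dtype in schema.items():
        if column not in df.columns:
            raise ValueError(f"Missing column required by the trained model: {column}")
        if str(df[column].dtype) != dtype:
            raise TypeError(f"Column '{column}' is {df[column].dtype}, expected {dtype}")
    # Extra columns (including the prediction key) are allowed: the model ignores them.
    extras = set(df.columns) - set(schema) - {prediction_key}
    if extras:
        print(f"Extra columns ignored during prediction: {sorted(extras)}")

staging_df = pd.DataFrame(
    {"customer_id": [101, 102], "age": [34, 51], "income": [52_000.0, 78_500.0], "segment": ["A", "B"]}
)
check_compatibility(staging_df, training_schema)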

In both Staging and Production, the Wand ML block in the Pipeline cannot be reconfigured, since it uses an already trained AI model. If you want to retrain the model or create a new one, go back to the Training Playground via the dropdown menu.

Results page

The results page contains a results table and aggregated metric boxes for a given dataset. The results table contains predictions for each sample (i.e. each row of the dataset), along with the AI model’s confidence and the quality of the features. In the results table, you can show or hide columns from the original dataset, and you can choose which prediction columns to display.

Choosing columns for the results table
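To make the layout concrete, here is a hypothetical sketch of the kind of information the results table holds: one row per sample, with the prediction and confidence next to whichever original columns you keep visible. The column names are illustrative only, not the exact ones Wand uses.

import pandas as pd

# Illustrative results table: prediction key, predicted class, model confidence,
# plus an original column that the user chose to display.
results = pd.DataFrame(
    {
        "customer_id": [101, 102, 103],
        "predicted_class": ["churn", "stay", "stay"],
        "confidence": [0.91, 0.64, 0.88],
        "income": [52_000.0, 78_500.0, 61_200.0],
    }
)

# Hiding a column from the view is conceptually just dropping it from the display.
print(results.drop(columns=["income"]))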

Explainability

Explainability in machine learning refers to the ability of a model to provide explanations for its predictions. Our model provides two types of explainability: local and global.

The local view explains the model’s prediction for each individual sample. By clicking the “i” symbol, you can open the local explainability interface for any row in the results table. Each feature of the sample is assigned an importance that quantifies its contribution to the prediction. Positive importance (plotted in green) indicates that the feature’s value for this sample increases the probability of the predicted class in classification tasks, or increases the predicted numerical value in regression tasks. Negative importance (plotted in blue) implies the opposite.

The global view aggregates feature importance over the whole dataset on which predictions are made. Unlike the importance for individual samples, global feature importance is always nonnegative and indicates how much, on average, the corresponding feature was used by the model to make predictions. The global explainability interface is accessible by clicking the global explainability button on the results table.

Local and global explainability
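The sketch below illustrates the relationship between the two views under one common convention (it is not Wand’s actual implementation): signed per-sample contributions for the local view, and their mean absolute value as the nonnegative global importance. Feature names and numbers are made up for illustration.

import numpy as np

# local_importance[i, j] = signed contribution of feature j to the prediction
# for sample i (positive pushes toward the predicted class / higher value).
feature_names = ["age", "income", "segment"]
local_importance = np.array([
    [0.40, -0.10, 0.05],   # sample 0
    [-0.20, 0.30, 0.15],   # sample 1
    [0.10, -0.25, 0.00],   # sample 2
])

# Local view: signed contributions for a single sample.
print("Local importances for sample 0:", dict(zip(feature_names, local_importance[0])))

# Global view: averaging absolute contributions is one reason global
# importance is always nonnegative.
global_importance = np.abs(local_importance).mean(axis=0)
print("Global importances:", dict(zip(feature_names, global_importance.round(3))))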

Scheduled Run (Cron)

Cron allows you to set up automatic runs of the Staging and Production environments at specified times and frequencies. It automatically acquires new data from the connected data source(s), executes the Pipeline, and stores the results, which you can review later on the results page of the corresponding environment.

Setting up Cron
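If you are unfamiliar with cron-style schedules, the sketch below previews standard cron expression semantics using the third-party croniter package. It is only an illustration of how such expressions are read; the actual schedule for your environment is configured in Wand’s UI, and whether it accepts raw cron expressions is an assumption here.

from datetime import datetime
from croniter import croniter  # third-party package: pip install croniter

# "0 6 * * 1-5" = every weekday at 06:00
# (fields: minute, hour, day of month, month, day of week).
schedule = croniter("0 6 * * 1-5", datetime(2024, 1, 1))

# Preview the next three times a Pipeline would run on this schedule.
for _ in range(3):
    print(schedule.get_next(datetime))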