MLflow integration

MLflow and Superwise are two powerful MLOps platforms that assist in managing ML models’ training, monitoring, and logging. The two systems have different capabilities: MLflow Experiments is primarily suited for tracking metrics, while Superwise offers a deeper, more comprehensive analysis of your models and data. This guide describes the concepts of integrating MLflow with Superwise. See also this end-to-end tutorial notebook.

Integrate Superwise with MLFlow

Superwise offers a simple way to integrate with MLflow. To establish the integration, you only need to match two types of parameters between the two systems; a short sketch of how the pieces fit together follows the steps below:

  1. Matching the names of the models.
import mlflow

# Setting up a global model name
model_name = "Diamond Model"

# Using the model name in MLflow’s experiment
mlflow.set_experiment(f"/Users/{databricks_username}/{model_name}")

# Using the same model name to create a Superwise model
from superwise.models.model import Model

sw_model = Model(
    name=model_name,
    description="..."
)
  2. Setting MLflow tags to match versions with Superwise.
superwise_version_name = "version_1"

# Tag the MLflow run so it can be matched to the Superwise model and version
tags = {
    "Superwise_model": model_name,
    "Superwise_version": superwise_version_name,
}
mlflow_run = mlflow.start_run(tags=tags)
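
Once the run is started with these tags, anything you log through MLflow is associated with the matching Superwise model and version. A minimal, illustrative continuation is shown below; the parameter and metric names and values are placeholders, not part of the Superwise API:

# Continue the tagged run started above: log whatever you already track in MLflow
mlflow.log_param("model_type", "GradientBoostingRegressor")  # illustrative parameter
mlflow.log_metric("rmse", 540.2)                             # illustrative value

# Close the run once training finishes
mlflow.end_run()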

πŸ“˜

The tags and metrics will appear in your MLflow experiment tracking dashboard, allowing you both to identify models and versions and to benefit from new types of metrics.

Demo notebook

In our demo notebook, we build a simple ML pipeline to predict the price of diamonds based on a set of numeric and categorical features.
The notebook tells the story of a machine learning experiment that went wrong due to data corruption.
An ML team working on the Diamonds dataset trained their first model, logged the training data and predictions to Superwise, and used MLflow to track the experiments.
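
For reference, the training step in the demo follows this general shape. The sketch below is only an approximation of the notebook, not its exact code: the feature selection, encoder, and regressor are assumptions, and it reuses the model_name and tags defined above.

import mlflow
import seaborn as sns
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Load the diamonds dataset: numeric + categorical features, "price" target
df = sns.load_dataset("diamonds")
X, y = df.drop(columns=["price"]), df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# One-hot encode the categorical columns, pass the numeric columns through
categorical = ["cut", "color", "clarity"]
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
    remainder="passthrough",
)
pipeline = Pipeline([("prep", preprocess), ("model", GradientBoostingRegressor())])

# Train and log under the tags that match the Superwise model and version
with mlflow.start_run(tags=tags):
    pipeline.fit(X_train, y_train)
    preds = pipeline.predict(X_test)
    mlflow.log_metric("rmse", mean_squared_error(y_test, preds) ** 0.5)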