Last updated: November 2022
- Integration improvements & bug fixes - This version includes many improvements to the integration process, making it easier to integrate with Superwise.
- Datasets improvements - We made some touch-ups to the platform, including performance improvements and modifications to the analytics screen to support the datasets concept.
- "Compare to dataset" - We have replaced "compare to baseline" with "compare to dataset" to fit the new dataset concept, and removed the option from non-drift metrics.
- Removed metrics - Due to recent platform updates, we have deprecated the Outliers and New Values metrics. Drift is no longer an out-of-the-box metric; it must be configured by the user by selecting the relevant reference dataset and function.
*Note - We have deprecated policy templates from this version onwards in favor of the upcoming policy builder. Stay tuned!
- Custom drift metrics - You can now define your drift metric based on the dataset that will act as a reference (e.g., drift compared to a training dataset). See documentation.
- Empty transaction alert - When logging production data to Superwise, you can now easily distinguish between an error at the transaction level and a file/record that arrived empty, marked in yellow on the data ingestion screen. See documentation.
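To make the custom drift metric concrete, here is a minimal sketch of the kind of reference-based drift function you can now configure: it compares a production sample against a chosen reference dataset. The Population Stability Index formula below is a standard drift measure; the function name, bucketing scheme, and zero-fraction floor are illustrative assumptions, not Superwise's implementation.

```python
# Sketch: a reference-based drift metric, comparing production data against
# a configured reference dataset (e.g., the training set).
# PSI is a standard drift measure; everything else here is illustrative.
import math

def psi(reference, production, buckets=10):
    """Population Stability Index between two numeric samples."""
    lo = min(min(reference), min(production))
    hi = max(max(reference), max(production))
    step = (hi - lo) / buckets or 1.0

    def fractions(values):
        counts = [0] * buckets
        for v in values:
            idx = min(int((v - lo) / step), buckets - 1)
            counts[idx] += 1
        # Floor zero fractions so the log term stays defined.
        return [max(c / len(values), 1e-6) for c in counts]

    ref, prod = fractions(reference), fractions(production)
    return sum((p - r) * math.log(p / r) for r, p in zip(ref, prod))

# Identical distributions -> no drift; a shifted sample -> high PSI.
stable = psi([1, 2, 3, 4, 5] * 20, [1, 2, 3, 4, 5] * 20)
shifted = psi([1, 2, 3, 4, 5] * 20, [3, 4, 5, 6, 7] * 20)
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as significant drift, which is the kind of threshold a monitoring policy would alert on.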
Scaling is crucial, especially when monitoring multi-model ML pipelines in production. With the new capabilities in the latest version, it has never been easier to manage the observability and monitoring of many models in production.
- Introducing datasets - Dataset acts as a baseline (historical reference) describing a model's behavior and contains the model's inputs, outputs, and labels (optional). Training or validation datasets are often initially used as baselines and allow different drift metric configurations. See documentation.
- Cross-project entities - You can now match similar entities/features across different models, versions, and datasets within a project, so you can monitor them all under the same policy and get an alert when a specific entity misbehaves in several models at once. See documentation.
- Cross-project segmentation - Using matching entities, it is now easier to apply segments across different models within a project with one configuration. This way, the same segment can be defined on many models at once and monitored accordingly. See documentation.
- Create notification channels via SDK - It is now possible to create notification channels using our SDK so that it will become part of the pipeline's automation. See documentation.
Segments are now associated with a project instead of a model:
- The Segment object contains project_id instead of model_id
- The SegmentDefinition enum now uses entity_id instead of entity_name
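The schema change above can be sketched with hypothetical dataclasses. The field names `project_id`, `model_id`, `entity_id`, and `entity_name` come from these release notes; the class shapes and the extra fields are illustrative assumptions, not the SDK's actual definitions.

```python
# Hypothetical sketches of the segment payload before and after this release.
# Only the renamed fields are taken from the release notes.
from dataclasses import dataclass

@dataclass
class SegmentV1:              # pre-release shape (deprecated)
    name: str
    model_id: str             # segments used to hang off a single model

@dataclass
class Segment:                # current shape
    name: str
    project_id: str           # segments now belong to a project

@dataclass
class SegmentDefinition:      # one condition inside a segment
    entity_id: str            # was entity_name before this release
    operator: str
    value: str

seg = Segment(name="EU traffic", project_id="proj-123")
cond = SegmentDefinition(entity_id="ent-42", operator="==", value="EU")
```

In migration terms: anywhere your pipeline code passed `model_id` when creating a segment, it should now pass the owning `project_id`, and segment conditions should reference entities by ID rather than by name.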
- Collecting data from GCS using a cloud function - You can now build a fully integrated pipeline with Superwise using Google Cloud Storage and Google Cloud Functions. See documentation.
- Performance metric model selection - You can now select the model to which a performance metric applies during configuration. See documentation.
- On-prem deployment improvements - It's now easier to deploy Superwise on-prem with our lightweight, state-of-the-art on-prem deployment. See documentation.
- General bug fixes
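The GCS integration above boils down to glue logic that runs when a new file lands in a bucket: read the storage event, build a transaction descriptor, and hand it to Superwise. The sketch below uses the real field names from GCS object notifications (`bucket`, `name`, `size`, `contentType`), but `handle_gcs_event` and `send_transaction` are stand-ins for the actual Cloud Function entry point and SDK/API call, not Superwise's implementation.

```python
# Sketch of the forwarding logic inside a GCS-triggered cloud function.
# Event keys (bucket, name, size, contentType) match GCS object
# notifications; send_transaction is a hypothetical stand-in for the
# Superwise ingestion call.
def handle_gcs_event(event, send_transaction):
    """Forward a newly finalized GCS object to the monitoring pipeline."""
    blob_uri = f"gs://{event['bucket']}/{event['name']}"
    transaction = {
        "file": blob_uri,
        "size_bytes": int(event.get("size", 0)),
        "content_type": event.get("contentType", "text/csv"),
    }
    send_transaction(transaction)
    return transaction

# Usage with a dummy sender that just records the payload:
sent = []
result = handle_gcs_event(
    {"bucket": "prod-data", "name": "2022-11/predictions.csv", "size": "1024"},
    sent.append,
)
```

Wiring this to a real deployment means registering the handler on the bucket's "object finalize" trigger, so every prediction file your pipeline drops into GCS is logged to Superwise automatically.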
Bug fixes and performance improvements.
When things are clear and transparent, teams thrive. This version focuses on visibility into what happened and when, so you can easily understand the state of your models at any point in time.
- Improved Incidents screen - Incidents are the heart of our platform; this is the place where you reveal what went wrong, where, and when. It is now easier than ever to investigate and get to the root cause of each event or anomaly. See documentation.
- Audit logs via API - You can now consume audit logs via the API. See documentation.
- Projects - It's now easier to support large-scale deployments by uniting a group of models based on a common denominator. See our Observability projects blog post.
- Shared views - You can now create different views within the metrics screen and share them across the organization to act as pre-defined, cross-teams dashboards.
- Monitoring delay control - When setting a monitoring policy in place, you can now set a monitoring delay (e.g., labels arrive after 3 months, so performance monitoring should be delayed by 3 months).
- Multi-segments view - You can now select and compare different segments on the same graphs in the metrics screen. This allows for a more thorough, in-depth investigation and analysis of different metrics to understand metrics behaviors on a sub-population level that couldn't be found when looking at the entire data.
- Audit logs for all changes made (create/update/delete for policies, incidents, models, and versions) - You can easily track these events based on your teams' actions within the Superwise platform.
- Automatic email integration channel creation - Once a team member joins your organization, a new integration channel (for anomaly detection alerting purposes) will automatically be created under their email so that they can add it to the alerting mechanism of each policy. See documentation.
Bug fixes and performance improvements.
- No-code integration - Onboarding and scaling are now easier with the ability to upload a model version or ongoing production data straight from the UI using a CSV file. See our No-code observability blog post.
- Transactions metadata - Easily understand your data ingestion pipelines: click any transaction in the data ingestion screen to see all of its relevant metadata. See documentation.
- Ability to define the normal period - When configuring a monitoring policy, you can now set the timeframe from which Superwise will learn the normal boundaries (e.g., learn from all history, or start learning only from the beginning of this quarter). See documentation.
- Notifications on failed transactions - Understand the health of your data ingestion pipeline with the ability to get notified when file transactions fail. See documentation.
- Model tags - You can now apply tags (badges) on each model to filter/group different models within the project for any reason (e.g., mark different models according to the various teams, applications, locations, tasks, etc.).
- Enhanced segment creation - added more operators for more robust sub-population definitions. See documentation.
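In practice, "learning the normal period" means deriving anomaly boundaries only from samples inside the configured timeframe. The sketch below uses mean ± 3 standard deviations as the boundary rule; that rule, and the function shape, are illustrative assumptions (the release notes do not specify Superwise's boundary model).

```python
# Sketch: derive anomaly boundaries only from the configured "normal period",
# ignoring older data from a previous regime. The mean +/- 3*stdev rule is
# an illustrative assumption, not Superwise's actual boundary model.
from datetime import date
from statistics import mean, stdev

def normal_boundaries(samples, start):
    """samples: list of (date, value); start: first date of the normal period."""
    window = [v for d, v in samples if d >= start]
    mu, sigma = mean(window), stdev(window)
    return mu - 3 * sigma, mu + 3 * sigma

history = [
    (date(2022, 1, 1), 100.0), (date(2022, 2, 1), 400.0),  # old regime
    (date(2022, 7, 1), 10.0), (date(2022, 8, 1), 12.0),
    (date(2022, 9, 1), 11.0), (date(2022, 10, 1), 13.0),
]
# Learn only from July onwards, so the old regime doesn't inflate the bounds:
low, high = normal_boundaries(history, start=date(2022, 7, 1))
```

Without the `start` cutoff, the two old-regime spikes would stretch the boundaries so wide that real anomalies in the current regime would never trigger an alert, which is exactly the failure mode this configuration option avoids.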
- MLflow integration - MLflow users can now easily enrich MLflow with Superwise metrics and better understand the differences in model performance. See our MLFlow integration blog post.
- Sorting and filters capabilities to models screen - It's easier to stay focused and control the models screen by sorting and filtering the models list.
- Ability to compute performance metrics per segment - We've enhanced our performance analysis capabilities so performance can be analyzed at the sub-population level. See documentation.
- Policy templates - Out of thousands of policies, we have created a list of pre-configured templates to streamline the monitoring configuration process and help you apply industry best practices to your models. Click on the "Add monitoring policy" button to get started.
- Access advanced API-based capabilities using Superwise's notebooks - Real-life examples of how to export data from the platform to better plan your retraining and compare model behavior using Superwise's API. See our community GitHub repository.
- Streamlined onboarding - We have added in-product guidance to quickly onboard new team members, as well as a complete getting started guide - either a quick-start recipe or an example demo notebook (run an end-to-end integration with example data).
- Full support suite - Includes: Chat with support, Meet with an expert, Open a ticket, Documentation, Quickstart, Example model notebook
- Ability to copy models/versions ID from within the UI (home-screen)
And more: Slack integration support, New Relic & Datadog integrations, Sagify integration, and a stabilized transactions screen.