Superwise supports any task that involves tabular data, including supervised and unsupervised learning, classification, regression, and more. We currently do not support array data types, so recommendation and embedding use cases are not supported either.
Superwise provides a list of out-of-the-box metrics, covering everything from min and max values to mean and standard deviation, alongside distribution shifts and different kinds of drift. You can find the entire collection on the Metrics page by clicking the Metrics button. Read more here to get additional information and find out how to create your own metrics!
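As an illustration of the kind of aggregates listed above, here is a minimal sketch of summarizing a numeric feature. This is generic Python, not the Superwise SDK; the function name and the choice of aggregates are illustrative:

```python
import numpy as np

def summarize_feature(values):
    """Compute out-of-the-box style aggregates for one numeric feature.

    NaN entries are treated as missing values and excluded from the
    min/max/mean/std aggregates. (Illustrative only, not the Superwise SDK.)
    """
    v = np.asarray(values, dtype=float)
    return {
        "min": float(np.nanmin(v)),
        "max": float(np.nanmax(v)),
        "mean": float(np.nanmean(v)),
        "std": float(np.nanstd(v)),
        "missing_ratio": float(np.isnan(v).mean()),
    }

# One feature column with a missing value:
print(summarize_feature([10.0, 12.0, float("nan"), 14.0]))
```

In practice such aggregates would be computed per feature and per time window, which is also what makes it possible to monitor without storing the raw predictions themselves.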
Labels enable monitoring metrics such as performance metrics, but other metrics, such as drift, missing values, mean value, and many more, are available for you to monitor even if you don't send label data. For more information, [read here](doc:metric).
You can read our Drift metrics documentation to understand how Superwise calculates the drift metrics.
It's up to you! You can specify the threshold manually, but we also supply automatic time-series anomaly detection, which learns expected values from historical data and alerts you when something goes wrong. Learn more…
Superwise can automatically infer thresholds using a heuristic that flags anomalies based on values outside the 1st and 99th percentiles (the ±99% band on each side of the distribution), with seasonal-based control limits. Learn more…
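The percentile heuristic above can be sketched in a few lines. This is a simplified illustration (plain NumPy, not the Superwise implementation, which additionally accounts for seasonality); the function names are hypothetical:

```python
import numpy as np

def infer_control_limits(history, lower_pct=1.0, upper_pct=99.0):
    """Derive control limits from the tails of a historical distribution.

    Values outside the 1st/99th percentiles are treated as anomalous.
    A production system would also adjust these limits for seasonality.
    """
    values = np.asarray(history, dtype=float)
    return np.percentile(values, lower_pct), np.percentile(values, upper_pct)

def is_anomalous(value, lower, upper):
    return value < lower or value > upper

# Example: historical daily mean values of some metric
history = np.random.default_rng(0).normal(loc=50, scale=5, size=1000)
lower, upper = infer_control_limits(history)
print(is_anomalous(49.0, lower, upper))  # a value well inside the band
print(is_anomalous(90.0, lower, upper))  # a value far outside the band
```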
Yes, as long as there is no data of any other version logged into Superwise during these dates.
We currently don't save raw data, only aggregated data, so there is no retention policy.
The new monitoring policy will begin running according to what you specified when you created it. By default, all policies run every day at 4 pm.
You can also run it immediately by using the API route:
Link to API docs
No, this is not possible. Once you send a prediction to Superwise you can no longer modify the data.
Every incident is automatically closed once its measured values return to within the control limits.
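The auto-close behavior amounts to re-evaluating the incident against the control limits on each new measurement. A minimal sketch, assuming hypothetical control limits (not the Superwise implementation):

```python
def incident_open(value, lower, upper):
    """An incident stays open while the measured value breaches the
    control limits, and is auto-closed once the value returns within them.
    (Illustrative logic only.)
    """
    return not (lower <= value <= upper)

# A metric drifts out of its (hypothetical) limits, then recovers:
for value in [52, 95, 97, 55]:
    state = "open" if incident_open(value, lower=40, upper=60) else "closed"
    print(value, state)
```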
In order to have a label drift metric for a multi-class label distribution:
- Configure a new drift metric and choose the fields on which to compute it; in this case, choose the relevant label field.
- Choose the reference dataset you want to compute drift against.
- Our distribution distance metric can compute the distance between two distributions for numeric or categorical (binary or multi-class) fields.
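To make the last step concrete, here is one common distribution-distance measure, the Jensen-Shannon distance, computed between a reference and a production multi-class label distribution. This is an illustration only; Superwise's exact distance measure is described in its Drift metrics documentation:

```python
from collections import Counter
import numpy as np

def label_distribution(labels, classes):
    """Normalized frequency of each class in a list of labels."""
    counts = Counter(labels)
    freqs = np.array([counts.get(c, 0) for c in classes], dtype=float)
    return freqs / freqs.sum()

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance (base-2, so bounded in [0, 1]) between
    two discrete distributions. One common choice of distribution
    distance; not necessarily the one Superwise uses."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

classes = ["cat", "dog", "bird"]
reference = label_distribution(["cat"] * 50 + ["dog"] * 40 + ["bird"] * 10, classes)
production = label_distribution(["cat"] * 20 + ["dog"] * 30 + ["bird"] * 50, classes)
print(round(js_distance(reference, production), 3))
```

A drift metric then simply tracks this distance over time and alerts when it crosses its threshold.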
You can actually do this in two ways: either overwrite the previous proxy-label performance metric each time until the real label arrives, or save multiple proxy and real-label performance metrics side by side.