Your dataset file must meet the following requirements:
- The combined size of all baseline data files should not exceed 100MB
- The dataset must contain columns for ID and Timestamp (see entities)
- The first row should contain the feature/entity names
Here's an example of what a dataset should look like, where:
- Features are `Device`, `Age`, and `Gender`
- Prediction is `is_fraud`
- Label (ground truth) is `is_fraudlant`
- ID is `ID`
- Timestamp is `Timestamp` (supported format: yyyy-mm-dd hh:mm:ss.SSS)
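A minimal sketch of a dataset that follows the layout above. The rows and values are hypothetical examples, not real data; only the column names come from the description:

```python
import csv
import io

# Hypothetical CSV matching the described schema:
# ID and Timestamp entities, features (Device, Age, Gender),
# prediction (is_fraud), and ground-truth label (is_fraudlant).
sample_csv = """ID,Timestamp,Device,Age,Gender,is_fraud,is_fraudlant
1001,2023-05-01 12:30:00.000,mobile,34,F,0,0
1002,2023-05-01 12:31:15.250,desktop,51,M,1,1
"""

# The first row is parsed as the feature/entity names.
rows = list(csv.DictReader(io.StringIO(sample_csv)))
print(rows[0]["Device"])  # -> mobile
```

Note that the header row supplies the column names, so every subsequent data file you send should reproduce it exactly.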
Data ingestion failures most commonly occur when the schema doesn't match. Make sure you use the same schema as the uploaded dataset.
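Since schema mismatches are the most common ingestion failure, it can help to compare an incoming file's header against the original dataset's columns before sending it. The helper below is a hypothetical client-side check, not part of the Superwise API:

```python
import csv
import io

def schema_mismatch(expected_columns, csv_text):
    """Return the set of missing or unexpected columns (empty if schemas match)."""
    header = next(csv.reader(io.StringIO(csv_text)))
    return set(expected_columns).symmetric_difference(header)

# Columns from the originally uploaded dataset (example schema).
expected = ["ID", "Timestamp", "Device", "Age", "Gender", "is_fraud", "is_fraudlant"]

ok_file = "ID,Timestamp,Device,Age,Gender,is_fraud,is_fraudlant\n"
bad_file = "ID,Timestamp,device_type,Age,Gender,is_fraud\n"

print(schema_mismatch(expected, ok_file))   # empty set: schemas match
print(schema_mismatch(expected, bad_file))  # renamed/missing columns flagged
```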
Superwise lets you track and monitor your logged data transactions. You can also track any failed transactions by setting an alert mechanism that will send an appropriate notification message to one of the suggested integration channels.
Choose how many failed transactions should occur in the selected time period before you will be notified.
Configure the alert mechanism to match your data-sending method. If you send data as a stream (records), getting alerts for a certain number of failed transactions may be less relevant than when you send a data file.
You can keep track of your failed transactions in the notification log. Another option is to select Show in list to filter the relevant failed transactions.
We don't keep the raw data available (for security reasons); it is used only for aggregation and metric calculation purposes.
Segment analysis starts from the moment you create a segment and is not applied retrospectively to historical data.
From the moment of creation until relevant production data is logged into Superwise, you will see 0 as the number of predictions under that segment.
There are several possible reasons for this behavior: