After successfully annotating enough products, you can create and train a prediction model. The model learns a mapping between some selected product attributes (the training attributes) and your new attribute. It can then select the most likely attribute value for each product based on your annotations and the information available in the training attributes. This means an attribute value can be assigned not only to the products you didn't manually annotate, but also to all the ones yet to arrive in your product feed.
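To make the idea of "learning a mapping from training attributes to the new attribute" concrete, here is a minimal, purely illustrative sketch: a tiny nearest-neighbour "model" that assigns to a new product the annotation of the most similar annotated product. The attribute names and values (`category`, `stretch`, `fit`) are hypothetical examples, not part of the product's actual API, and the real model is of course more sophisticated.

```python
# Illustrative sketch only: a 1-nearest-neighbour "model" that learns a
# mapping from training attributes to a new attribute. All attribute
# names and values below are made-up examples.

def overlap(a, b):
    """Count how many training attributes two products share."""
    return sum(1 for key in a if a[key] == b.get(key))

def predict(annotated, product):
    """Return the annotation of the most similar annotated product."""
    best = max(annotated, key=lambda item: overlap(item["attrs"], product))
    return best["label"]

# Manually annotated products: training attributes -> new attribute "fit".
annotated = [
    {"attrs": {"category": "jeans",  "stretch": "high"}, "label": "slim"},
    {"attrs": {"category": "jeans",  "stretch": "low"},  "label": "regular"},
    {"attrs": {"category": "chinos", "stretch": "low"},  "label": "regular"},
]

# A new, unannotated product gets the most likely value assigned.
new_product = {"category": "jeans", "stretch": "high"}
print(predict(annotated, new_product))  # -> slim
```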
How many products you need to annotate depends heavily on the complexity of the attribute the model should learn, on the diversity of your products, and on how informative the selected training attributes are. It is best to annotate products iteratively and train new models until the desired performance is reached. There are, however, a couple of things you should keep in mind:
In the Annotation view, on the right-hand side, you will see the text "Currently there is no model" and a button to create a new model.
Clicking on the "Create Model" button opens a modal window. In this window you have to set the training attributes for the new model. An example window is depicted below.
You should select only the most informative attributes. These could be:
Remember, it is always best to start simple. Adding a training attribute that does not contain any useful information about the new attribute only introduces unnecessary noise into the model input. Hence, performance would most likely drop rather than improve with the addition of unneeded attributes. It is recommended to start with just a single attribute and add more iteratively until the model's performance stops increasing.
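The recommended iterative workflow is essentially a greedy forward selection. The sketch below illustrates the idea in plain Python; `train_and_score` is a hypothetical stand-in for "retrain the model with these attributes and read the accuracy from the model card", and the attribute names and gains are invented for the example.

```python
# Sketch of the recommended workflow: start with one training attribute and
# greedily add more only while model accuracy keeps improving.

def greedy_attribute_selection(candidates, train_and_score):
    selected, best_score = [], 0.0
    while candidates:
        # Score each remaining candidate added to the current selection.
        scored = [(train_and_score(selected + [c]), c) for c in candidates]
        score, attr = max(scored)
        if score <= best_score:   # no candidate improves the model: stop
            break
        selected.append(attr)
        candidates.remove(attr)
        best_score = score
    return selected, best_score

# Toy scoring function: "title" and "material" are informative, "sku" is noise.
gains = {"title": 0.5, "material": 0.3, "sku": -0.1}

def score(attrs):
    return sum(gains[a] for a in attrs)

selected, best = greedy_attribute_selection(["title", "material", "sku"], score)
print(selected)  # -> ['title', 'material']
```

The noisy `sku` attribute is never added, because including it lowers the score, mirroring the advice above.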
To find the attributes, you can use several filter settings. You can search for
All selected attributes are displayed as tags below the attribute list. Any attribute can be removed again via its "x" icon.
Clicking the "Start" button triggers the model creation process. You should see the following processing screen:
After the model has been successfully created, a model card is shown. It contains the following elements:
If you are not happy with the prediction results of the model, it is possible to retrain it to get better results. Furthermore, for some model states (invalid and incompatible), retraining is mandatory before the attribute can be used in your Product Guide.
The structure of the modal window is exactly the same as for model creation. The only difference is that the previously used training attributes are preselected. After making your changes and clicking the "Start" button, the model is retrained. Afterwards, you can review the performance stats and value predictions to see whether the changes were worth it.
You cannot revert the model to an older state. That means after retraining, the old model is gone for good.
Your model can reach different states during its existence. Each state listed below is worse than the one above it.
Furthermore, a state can only deteriorate, by one or multiple levels, but never improve on its own. The only state a model can move back up to is 'valid'.
The annotation model is consistent with the annotated values and the product data.
This is the state you should always aim for.
The model is still valid, but new annotation values exist that could change the model's predictions.
You should consider retraining the model with the new information.
The model is invalid, e.g. because synthetic attribute values were added or removed, or because attribute properties changed.
In this case, retraining the model is mandatory. Otherwise, the attribute can neither be applied to the data nor used in the Concept Board.
The model is incompatible, e.g. because training attributes have disappeared from the data feed.
To use the created attribute in the Product Guide, the model has to be retrained.
In the model performance section, you see three metrics: the model accuracy, the number of matching predictions (same prediction as annotation), and the number of annotated products the model has been trained on. Note that the percentage shown for the last metric is determined relative to the number of products currently in your product feed and can hence exceed 100% if some products were removed. For example, if you annotated 120 products but the feed now contains only 100, the value shown is 120%.
Even if annotated products are scarce, the best possible model will be trained. To estimate its quality, the annotated products are split in 5 different ways, each split containing a training data set of 80% of the products and a test data set of the remaining 20%. The model accuracy achieved on each of these 5 splits is measured on both the training and the test data sets, and the results are presented as averaged values. Finally, the model is trained on the entire data set containing all product annotations, and the resulting numbers of matches and mismatches are presented.
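The splitting scheme described above can be sketched as follows. This is an illustration of the general 5-way 80/20 evaluation idea, not the product's actual implementation; the integers stand in for annotated products.

```python
import random

# Sketch of the evaluation scheme: 5 different splits of the annotated
# products, each with an 80% training set and a 20% test set.

def five_splits(items, seed=0):
    """Yield 5 (train, test) splits; each test set holds 20% of the items."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed keeps the sketch reproducible
    fold = len(items) // 5
    for i in range(5):
        test = items[i * fold:(i + 1) * fold]
        train = items[:i * fold] + items[(i + 1) * fold:]
        yield train, test

products = list(range(100))              # stand-ins for 100 annotated products
for train, test in five_splits(products):
    assert len(train) == 80 and len(test) == 20
    assert not set(train) & set(test)    # train and test never overlap
```

In the real pipeline, a model is trained on each training set, its accuracy is measured on both sets, the 5 results are averaged, and the final model is then trained on all 100 products.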
Measuring model accuracy, however, is not a straightforward task. One way is to measure the percentage of the annotated attribute values the model predicted correctly. But imagine you have assigned the same attribute value to 80% of the products you annotated. Then a model predicting that attribute value independent of the product would be 80% accurate. Despite the high accuracy, that probably isn't a very useful model.
Instead, for each attribute value, the corresponding annotated products are considered, and the percentage of these on which the model prediction was correct is measured. The final accuracy score is then the average over all attribute values. In the scenario above, with 2 possible attribute values, the model would achieve 100% accuracy on one value and 0% on the other. This results in a final accuracy of 50%, which better expresses that the model described above is no better at determining the correct attribute value for a product than tossing a coin.
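The difference between the two ways of measuring accuracy can be reproduced numerically. The sketch below recreates the scenario above with made-up attribute values ("red"/"blue"): 80% of the annotations share one value, and the "model" always predicts that value.

```python
from collections import defaultdict

# 100 annotated products: 80% have the value "red", 20% have "blue".
annotations = ["red"] * 80 + ["blue"] * 20
predictions = ["red"] * 100   # a "model" that ignores the product entirely

# Plain accuracy: fraction of all annotations predicted correctly.
plain = sum(a == p for a, p in zip(annotations, predictions)) / len(annotations)

# Per-value accuracy, then the average over all attribute values.
correct, total = defaultdict(int), defaultdict(int)
for a, p in zip(annotations, predictions):
    total[a] += 1
    correct[a] += (a == p)
per_value = {v: correct[v] / total[v] for v in total}
balanced = sum(per_value.values()) / len(per_value)

print(plain)      # 0.8  -> looks good, but is misleading
print(per_value)  # {'red': 1.0, 'blue': 0.0}
print(balanced)   # 0.5  -> reveals the coin-toss quality of the model
```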