[Theory, Code] Attaining dissimilar & inconsistent results with Avalanche libraries #705
-
Continuing this discussion, we were experimenting with the impact of task order on imbalanced datasets and tried to implement the same setup using Avalanche. For a quick recap, the task order is {'1': [0,1,2], '2': [3,4,5], '3': [6,7,8], '4': [9,10,11], '5': [12,13,14]}, where 0, 1, 2, etc. are class labels. We then used Avalanche's tensor_scenario to declare the Avalanche dataset.
After training, the confusion matrix contains rows such as [[750123, 0],
[184023, 0]], where the true positives and false positives come out as 0s. Surprised by this, we then redefined the tasks so that all the classes are placed in a single task (the normal ML scenario where all the data is seen at once), i.e., task order = {[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14]}. Ideally, we would expect a good confusion matrix here, as this is a plain machine-learning setup (all the data is sent to the model at once). Nevertheless, we see the same thing happening: the model does not recognize any of the samples. The complete code is on GitHub. Please let us know if you need any clarification on the implementation.
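For reference, the task-order dictionary above can be turned into a per-task partition of sample indices before handing the tensors to Avalanche. The sketch below is stdlib-only and illustrative; `split_labels_by_task` is a hypothetical helper, not part of the Avalanche API.

```python
# Sketch (stdlib only): partition sample indices by task, given the
# class-to-task mapping described above. Illustrative helper, not
# Avalanche API.

def split_labels_by_task(labels, task_order):
    """Group sample indices by the task whose class list contains each label."""
    class_to_task = {c: t for t, classes in task_order.items() for c in classes}
    per_task = {t: [] for t in task_order}
    for idx, y in enumerate(labels):
        per_task[class_to_task[y]].append(idx)
    return per_task

task_order = {'1': [0, 1, 2], '2': [3, 4, 5], '3': [6, 7, 8],
              '4': [9, 10, 11], '5': [12, 13, 14]}
labels = [0, 3, 6, 9, 12, 1, 4]  # toy label stream
print(split_labels_by_task(labels, task_order))
# → {'1': [0, 5], '2': [1, 6], '3': [2], '4': [3], '5': [4]}
```

Each resulting index list would then back one experience when building the benchmark.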
-
There may be a bug in the confusion matrix. @AndreaCossu can you confirm it?
The relevant issue is #703.
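While the issue is investigated, one way to isolate whether the problem lies in the model or in the metric plugin is to compute the confusion matrix by hand on the same predictions. A minimal stdlib-only sketch; `confusion_matrix` here is a hand-rolled helper, not Avalanche's metric:

```python
# Sketch (stdlib only): hand-rolled confusion matrix to cross-check the
# values reported by the metric plugin. If all mass lands in one column,
# either the model collapses to a single class or the metric receives
# wrong inputs.

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1
    return cm

y_true = [0, 0, 1, 1]
y_pred = [0, 0, 0, 0]  # degenerate predictor: always class 0
print(confusion_matrix(y_true, y_pred, 2))
# → [[2, 0], [2, 0]] — the same all-zeros-off-one-column pattern reported above
```

If the hand-rolled matrix looks sane but Avalanche's does not, that points at the metric rather than the training loop.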