| | gut | left palm | right palm | tongue |
|---|---|---|---|---|
| gut | 1.0 | 0.0 | 0.0 | 0.0 |
| left palm | 0.0 | 0.875 | 0.125 | 0.0 |
| right palm | 0.0 | 0.0 | 1.0 | 0.0 |
| tongue | 0.0 | 0.0 | 0.0 | 1.0 |

| Metric | Value |
|---|---|
| Overall Accuracy | 0.970588 |
| Baseline Accuracy | 0.264706 |
| Accuracy Ratio | 3.666667 |
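The summary metrics follow directly from the confusion matrix: overall accuracy is the fraction of correctly classified samples, baseline accuracy is the frequency of the most common class (what always predicting the majority class would score), and the accuracy ratio is their quotient. A minimal sketch, using hypothetical per-class sample counts chosen only to be consistent with the figures above (the true counts are not shown in this report):

```python
# Hypothetical per-class sample counts, consistent with the reported
# summary values; the actual counts are not part of this report.
counts = {"gut": 9, "left palm": 8, "right palm": 9, "tongue": 8}
total = sum(counts.values())  # 34 samples in total

# Per the matrix above, every sample is classified correctly except
# one "left palm" sample (0.875 = 7/8 correct in that row).
correct = total - 1

overall = correct / total                 # fraction of correct predictions
baseline = max(counts.values()) / total   # majority-class frequency
ratio = overall / baseline                # improvement over the baseline

print(round(overall, 6), round(baseline, 6), round(ratio, 6))
# → 0.970588 0.264706 3.666667
```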
Receiver operating characteristic (ROC) curves are a graphical representation of the classification performance of a machine-learning model. The ROC curve plots the true positive rate (TPR, on the y-axis) against the false positive rate (FPR, on the x-axis) at various classification thresholds. The top-left corner of the plot therefore represents the "optimal" performance position: an FPR of zero and a TPR of one. This ideal is unlikely to occur in practice, but a greater area under the curve (AUC) indicates better performance. Performance can be compared against random chance, which is represented here as a diagonal line extending from the lower-left to the upper-right corner (AUC = 0.5). Additionally, the "steepness" of the curve is important, as a good classifier should maximize the TPR while minimizing the FPR. In addition to the ROC curves for each class, average ROCs and AUCs are calculated. "Micro-averaging" calculates metrics globally by pooling the decisions for every sample; hence class imbalance impacts this metric. "Macro-averaging" instead computes the metric independently for each class and then takes the unweighted mean, giving equal weight to each class regardless of its size.
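The difference between the two averaging schemes can be seen with scikit-learn's `roc_auc_score`. This is a self-contained sketch on a toy, imbalanced three-class problem with synthetic scores (the labels and scores are illustrative, not taken from the report above):

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

# Toy, deliberately imbalanced 3-class labels (class 0 dominates).
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2, 2])

# Synthetic per-class probability scores (rows sum to 1).
rng = np.random.default_rng(0)
y_score = rng.dirichlet(np.ones(3), size=len(y_true))

# One-hot encode the labels for one-vs-rest AUC computation.
y_bin = label_binarize(y_true, classes=[0, 1, 2])

# Micro-average: pools every (sample, class) decision into one curve,
# so the majority class dominates the result.
micro = roc_auc_score(y_bin, y_score, average="micro")

# Macro-average: one AUC per class, then the unweighted mean,
# so each class counts equally regardless of its size.
macro = roc_auc_score(y_bin, y_score, average="macro")

print(micro, macro)
```

With a skewed class distribution, the two averages generally diverge: the micro-average tracks how well the dominant class is handled, while the macro-average exposes poor performance on rare classes.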