| Actual \ Predicted | susceptible and Healthy | susceptible and PD | wild type and Healthy | wild type and PD |
|---|---|---|---|---|
| susceptible and Healthy | 1.0 | 0.0 | 0.0 | 0.0 |
| susceptible and PD | 0.0 | 1.0 | 0.0 | 0.0 |
| wild type and Healthy | 0.333333 | 0.0 | 0.666667 | 0.0 |
| wild type and PD | 0.0 | 0.0 | 0.0 | 1.0 |

Overall Accuracy: 0.9
Baseline Accuracy: 0.3
Accuracy Ratio: 3.0
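The summary values follow directly from the confusion matrix once per-class sample counts are known. As a minimal sketch (the class counts below are hypothetical, chosen only to be consistent with the row-normalized rates in the matrix):

```python
import numpy as np

# Hypothetical per-class test-set sizes (not reported here); chosen so the
# row-normalized rates in the confusion matrix are reproduced exactly.
class_counts = np.array([3, 3, 3, 1])

# Row-normalized confusion matrix (rows: actual class, columns: predicted).
conf = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [1/3, 0.0, 2/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Correctly classified samples sit on the diagonal.
correct = (np.diag(conf) * class_counts).sum()
overall_accuracy = correct / class_counts.sum()

# Baseline: always predict the most frequent class.
baseline_accuracy = class_counts.max() / class_counts.sum()

accuracy_ratio = overall_accuracy / baseline_accuracy
print(overall_accuracy, baseline_accuracy, accuracy_ratio)  # 0.9 0.3 3.0
```

An accuracy ratio above 1.0 indicates the model outperforms the majority-class baseline; here it does so by a factor of three.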
Receiver operating characteristic (ROC) curves are a graphical representation of the classification performance of a machine-learning model. The ROC curve plots the true positive rate (TPR, on the y-axis) against the false positive rate (FPR, on the x-axis) at various threshold settings. The top-left corner of the plot therefore represents the "optimal" operating point: an FPR of zero and a TPR of one. This ideal is unlikely to be reached in practice, but a greater area under the curve (AUC) indicates better performance. Performance can be compared to random chance, represented here as a diagonal line from the lower-left to the upper-right corner (AUC = 0.5). The "steepness" of the curve also matters, as a good classifier should raise the TPR quickly while keeping the FPR low. In addition to the ROC curves for each class, averaged ROCs and AUCs are calculated. "Micro-averaging" computes the metric globally by pooling all samples across classes, so class imbalance affects this metric. "Macro-averaging" instead computes the metric independently for each class and takes the unweighted mean, giving equal weight to each class regardless of its size.
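As a sketch of how a single ROC curve and its AUC are obtained by sweeping the decision threshold over classifier scores (a simplified pure-Python version that ignores tied scores; the labels and scores below are illustrative only):

```python
def roc_points(y_true, scores):
    """Return (FPR, TPR) pairs obtained by sweeping the threshold
    from the highest score down. Assumes no tied scores."""
    ranked = sorted(zip(scores, y_true), reverse=True)
    pos = sum(y_true)
    neg = len(y_true) - pos
    fpr, tpr = [0.0], [0.0]
    tp = fp = 0
    for _, label in ranked:
        if label:
            tp += 1   # lowering the threshold past a positive raises TPR
        else:
            fp += 1   # lowering it past a negative raises FPR
        tpr.append(tp / pos)
        fpr.append(fp / neg)
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the curve via the trapezoidal rule."""
    return sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
               for i in range(len(fpr) - 1))

# A perfect classifier ranks every positive above every negative: AUC = 1.0.
fpr, tpr = roc_points([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])
print(auc(fpr, tpr))  # 1.0
```

A classifier that ranks every negative above every positive would instead give an AUC of 0.0, and random scoring hovers around the 0.5 diagonal.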