ROC Curve

What a ROC curve shows, how AUC is reported in Licklider, and how to interpret the figure.

A ROC (receiver operating characteristic) curve visualizes the trade-off between sensitivity and specificity across all possible classification thresholds for a binary outcome. It is the standard way to evaluate how well a logistic regression model or other binary classifier discriminates between the two outcome classes.
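
For readers who want to reproduce the computation outside the tool, here is a minimal sketch using scikit-learn on made-up data. The names y_true and y_score are illustrative, and this is not a description of Licklider's internals:

  import numpy as np
  from sklearn.metrics import roc_curve, roc_auc_score

  # Illustrative inputs: true binary labels and predicted probabilities
  # from any binary classifier (e.g., a fitted logistic regression).
  y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])
  y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])

  # One (false positive rate, true positive rate) point per threshold.
  fpr, tpr, thresholds = roc_curve(y_true, y_score)

  # Area under the curve, the single number shown in the figure legend.
  print(roc_auc_score(y_true, y_score))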

What the figure shows

The ROC curve

The x-axis shows the false positive rate (1 − specificity): the proportion of negative cases that are incorrectly classified as positive at a given threshold. The y-axis shows the true positive rate (sensitivity): the proportion of positive cases correctly identified.
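
In terms of the four confusion-matrix counts, these are TPR = TP / (TP + FN) and FPR = FP / (FP + TN), where TP, FP, TN, and FN are the true positive, false positive, true negative, and false negative counts at the chosen threshold.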

Each point on the curve corresponds to a classification threshold. Moving along the curve from bottom-left to top-right corresponds to lowering the threshold: more cases are classified as positive, which raises (or at least holds constant) both the sensitivity and the false positive rate, as the sketch below illustrates.
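
A small sketch of that threshold effect, using plain NumPy on illustrative data:

  import numpy as np

  y_true  = np.array([0, 0, 1, 1, 0, 1])
  y_score = np.array([0.2, 0.4, 0.35, 0.8, 0.1, 0.6])

  def rates(threshold):
      # Classify as positive when the score meets the threshold.
      pred = y_score >= threshold
      tpr = (pred & (y_true == 1)).sum() / (y_true == 1).sum()
      fpr = (pred & (y_true == 0)).sum() / (y_true == 0).sum()
      return tpr, fpr

  print(rates(0.5))  # stricter threshold: roughly (0.667, 0.0)
  print(rates(0.3))  # looser threshold: both rates rise, to (1.0, 0.333)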

The chance line

A diagonal dashed line from (0, 0) to (1, 1) represents a classifier with no discriminating ability — equivalent to random guessing. A useful classifier should produce a curve that bows toward the top-left corner.
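
One way to convince yourself of this: scores that carry no information about the labels land near the diagonal, giving AUC close to 0.5. A quick simulation sketch, again illustrative:

  import numpy as np
  from sklearn.metrics import roc_auc_score

  rng = np.random.default_rng(0)
  y_true = rng.integers(0, 2, size=10_000)   # arbitrary binary labels
  y_score = rng.random(size=10_000)          # scores unrelated to the labels
  print(roc_auc_score(y_true, y_score))      # approximately 0.5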

AUC

The area under the ROC curve (AUC) is shown in the figure legend. An AUC of 0.5 indicates discrimination no better than chance; an AUC of 1.0 indicates perfect discrimination. The AUC is reported as a point estimate.
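
A useful equivalent reading: the AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A brute-force sketch of that equivalence on illustrative data (ties counted as half):

  import numpy as np
  from sklearn.metrics import roc_auc_score

  y_true  = np.array([0, 0, 1, 1, 0, 1])
  y_score = np.array([0.2, 0.4, 0.35, 0.8, 0.1, 0.6])

  pos = y_score[y_true == 1]
  neg = y_score[y_true == 0]
  # Fraction of (positive, negative) pairs ranked correctly.
  pairs = (pos[:, None] > neg[None, :]).mean() \
      + 0.5 * (pos[:, None] == neg[None, :]).mean()
  print(pairs, roc_auc_score(y_true, y_score))  # the two values agree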

How to read the AUC

  AUC        Interpretation
  0.5        No discrimination (equivalent to chance)
  0.6-0.7    Poor discrimination
  0.7-0.8    Acceptable discrimination
  0.8-0.9    Excellent discrimination
  > 0.9      Outstanding discrimination

These thresholds are guidelines. The appropriate AUC target depends on the clinical or scientific context and the consequences of false positives versus false negatives.

How it is generated

The ROC curve is generated automatically when logistic regression is run on a binary outcome. It does not require a separate request — the figure appears alongside the coefficient table and model statistics.

To generate it explicitly:

  • "Show a ROC curve"
  • "Plot sensitivity vs specificity"
  • "Run logistic regression and show the ROC curve"

What is not shown

The figure does not indicate the optimal classification threshold. Threshold selection depends on the relative cost of false positives and false negatives in your specific application and is outside the scope of the figure.
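
If you need a working threshold anyway and treat false positives and false negatives as roughly equally costly, one common heuristic is Youden's J, which picks the threshold maximizing TPR - FPR. A sketch using scikit-learn's roc_curve output, on illustrative data (this is not something the figure computes for you):

  import numpy as np
  from sklearn.metrics import roc_curve

  y_true  = np.array([0, 0, 1, 1, 0, 1])
  y_score = np.array([0.2, 0.4, 0.35, 0.8, 0.1, 0.6])

  fpr, tpr, thresholds = roc_curve(y_true, y_score)
  j = tpr - fpr                     # Youden's J at each candidate threshold
  print(thresholds[np.argmax(j)])   # threshold with the best TPR - FPR trade-off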

Confidence intervals for AUC are not currently shown. The AUC is reported as a single value.
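
If you need an uncertainty estimate for the AUC in the meantime, one standard approach is a nonparametric (case-resampling) bootstrap, sketched here on illustrative data:

  import numpy as np
  from sklearn.metrics import roc_auc_score

  rng = np.random.default_rng(0)
  y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])
  y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9])

  aucs = []
  n = len(y_true)
  for _ in range(2000):
      idx = rng.integers(0, n, size=n)     # resample cases with replacement
      if len(np.unique(y_true[idx])) < 2:  # need both classes to score AUC
          continue
      aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

  print(np.percentile(aucs, [2.5, 97.5]))  # approximate 95% percentile interval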
