ROC Curve & AUC

Plot a Receiver Operating Characteristic (ROC) curve from classifier scores and binary labels. Unlike the conventional FPR/TPR layout, the x-axis is sensitivity (true positive rate) and the y-axis is specificity (true negative rate); the AUC, computed via the trapezoidal rule, is nevertheless equal to the standard ROC AUC.
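To see why the two areas agree, here is a standalone NumPy check (independent of plotutils, with synthetic data) comparing the trapezoidal area under the specificity-vs-sensitivity curve with the conventional TPR-vs-FPR area:

```python
import numpy as np

# Synthetic scores: positives centred higher than negatives (no tied scores).
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])
labels = np.concatenate([np.ones(50), np.zeros(50)])

# Sweep thresholds from high to low, one step per sample.
order = np.argsort(-scores)
tpr = np.concatenate([[0.0], np.cumsum(labels[order]) / 50])      # sensitivity
fpr = np.concatenate([[0.0], np.cumsum(1 - labels[order]) / 50])
spec = 1.0 - fpr                                                  # specificity

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

auc_standard = trapezoid(tpr, fpr)    # area under TPR vs FPR
auc_spec_sens = trapezoid(spec, tpr)  # area under specificity vs sensitivity
assert abs(auc_standard - auc_spec_sens) < 1e-9
```

The equality is exact, not approximate: integrating specificity over sensitivity is the by-parts mirror image of integrating TPR over FPR, and the identity holds term-by-term for the trapezoid sums as well.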

Interactive example

Hover over the intersection points to see the target specificity, the actual specificity achieved, the corresponding sensitivity, and the score cutoff in the data.

import polars as pl
from plotutils.auc import plot_roc_curve

df = pl.DataFrame({
    "score": [...],   # classifier probability / score (higher → more likely positive)
    "label": [...],   # ground-truth binary label (0 = negative, 1 = positive)
})

chart = plot_roc_curve(
    df,
    score_col="score",
    label_col="label",
    specificity_levels=[0.95, 0.90, 0.80],
)

chart.save("roc.html")   # interactive; or .show() in a notebook

Specificity levels

specificity_levels annotates the curve at one or more target specificity values. For each target a dashed cross-hair is drawn: a horizontal line from the y-axis to the curve, then a vertical line down to the x-axis.

The actual specificity shown is always ≥ the requested level (conservative selection): the closest curve point at or above the target is chosen. When multiple points tie, the one with the highest sensitivity is preferred.

chart = plot_roc_curve(df, specificity_levels=[0.95, 0.90, 0.80])

Hovering an intersection point shows:

Tooltip field         Meaning
Target specificity    The value you passed in
Actual specificity    The specificity of the closest curve point
Sensitivity           The corresponding true positive rate
Cutoff                The score threshold in your data

Computing the curve and AUC directly

The two compute helpers are available separately if you need the raw numbers:

from plotutils.auc import _compute_roc, _compute_auc

roc_df = _compute_roc(df, score_col="score", label_col="label")
# ┌───────────┬─────────────┬─────────────┐
# │ threshold ┆ sensitivity ┆ specificity │
# │ f64       ┆ f64         ┆ f64         │
# ╞═══════════╪═════════════╪═════════════╡
# │ null      ┆ 0.0         ┆ 1.0         │  ← start boundary
# │ 0.97      ┆ 0.04        ┆ 1.0         │
# │ …         ┆ …           ┆ …           │
# │ null      ┆ 1.0         ┆ 0.0         │  ← end boundary
# └───────────┴─────────────┴─────────────┘

auc = _compute_auc(roc_df)   # e.g. 0.823

_compute_roc is fully vectorised in Polars: scores are grouped by unique value (ties handled correctly), sorted descending, and cumulative TP / FP counts are derived with cum_sum — no Python loop over thresholds.

Reference

plotutils.auc.plot_roc_curve(df, score_col='score', label_col='label', specificity_levels=None, title='', width=400, height=400, curve_color='steelblue', id_col=None, **kwargs)

Plot a ROC curve with sensitivity on the x-axis and specificity on the y-axis.

The area under this curve is mathematically equal to the standard AUC (area under the FPR / TPR ROC curve).

Parameters:

df : DataFrame, required
    DataFrame containing a score column and a binary label column.
score_col : str, default 'score'
    Column with classifier scores (higher → more likely positive).
label_col : str, default 'label'
    Column with binary ground-truth labels (0 = negative, 1 = positive).
specificity_levels : list[float] or None, default None
    Target specificity values to annotate on the curve. For each level a dashed horizontal line is drawn from the y-axis to the curve, then a dashed vertical line goes down to the x-axis. The intersection point shows a tooltip with the closest threshold (cutoff) in the data.
title : str, default ''
    Chart title. When empty, defaults to "ROC curve (AUC = …)".
width, height : int, default 400
    Chart dimensions in pixels.
curve_color : str, default 'steelblue'
    CSS color for the ROC curve.
id_col : str or None, default None
    Optional column name containing patient / sample identifiers. When provided, hovering over any threshold step on the curve reveals the ID(s) of the patient(s) whose score equals that cutoff (ties are shown as a comma-separated list).
**kwargs
    Additional keyword arguments are passed to _compute_roc (e.g. reverse_score=True if lower scores indicate more likely positive).

Returns:

alt.LayerChart

Source code in src/plotutils/auc.py
def plot_roc_curve(
    df: pl.DataFrame,
    score_col: str = "score",
    label_col: str = "label",
    specificity_levels: list[float] | None = None,
    title: str = "",
    width: int = 400,
    height: int = 400,
    curve_color: str = "steelblue",
    id_col: str | None = None,
    **kwargs,
) -> alt.LayerChart:
    """Plot a ROC curve with sensitivity on the x-axis and specificity on the y-axis.

    The area under this curve is mathematically equal to the standard AUC
    (area under the FPR / TPR ROC curve).

    Parameters
    ----------
    df : pl.DataFrame
        DataFrame containing a score column and a binary label column.
    score_col : str
        Column with classifier scores (higher → more likely positive).
    label_col : str
        Column with binary ground-truth labels (0 = negative, 1 = positive).
    specificity_levels : list[float] or None
        Target specificity values to annotate on the curve.  For each level a
        dashed horizontal line is drawn from the y-axis to the curve, then a
        dashed vertical line goes down to the x-axis.  The intersection point
        shows a tooltip with the closest threshold (cutoff) in the data.
    title : str
        Chart title.  When empty, defaults to ``"ROC curve  (AUC = …)"``.
    width, height : int
        Chart dimensions in pixels.
    curve_color : str
        CSS color for the ROC curve.
    id_col : str or None
        Optional column name containing patient / sample identifiers.  When
        provided, hovering over any threshold step on the curve reveals the
        ID(s) of the patient(s) whose score equals that cutoff (ties are shown
        as a comma-separated list).
    **kwargs
        Additional keyword arguments are passed to `_compute_roc` (e.g. `reverse_score=True` if lower scores indicate more likely positive).

    Returns
    -------
    alt.LayerChart
    """
    alt.data_transformers.disable_max_rows()

    roc_df = _compute_roc(df, score_col, label_col, **kwargs)
    auc = _compute_auc(roc_df)
    chart_title = alt.Title(title, subtitle=f"ROC curve  (AUC = {auc:.3f})")

    # --- Reference diagonal (random classifier) -----------------------
    diag = (
        alt.Chart(pl.DataFrame({"x": [0.0, 1.0], "y": [1.0, 0.0]}))
        .mark_line(color="#444", strokeWidth=0.75, opacity=0.4)
        .encode(x="x:Q", y="y:Q")
    )

    # --- Main ROC curve -----------------------------------------------
    # Drop the threshold column: boundary points have null threshold and
    # the curve only needs sensitivity/specificity for its x/y encoding.
    curve = (
        alt.Chart(roc_df.select(["sensitivity", "specificity"]))
        .mark_line(color=curve_color)
        .encode(
            x=alt.X(
                "sensitivity:Q", title="Sensitivity", scale=alt.Scale(domain=[0, 1])
            ),
            y=alt.Y(
                "specificity:Q", title="Specificity", scale=alt.Scale(domain=[0, 1])
            ),
            tooltip=[
                alt.Tooltip("sensitivity:Q", format=".3f"),
                alt.Tooltip("specificity:Q", format=".3f"),
            ],
        )
    )

    layers: list[alt.Chart | alt.LayerChart] = [diag, curve]

    # --- Per-threshold patient ID tooltip (optional) ------------------
    if id_col is not None:
        # For each unique score value (= threshold step), collect the IDs of
        # all patients at that score.  Ties produce a comma-separated list.
        id_by_threshold = (
            df.select([score_col, id_col])
            .with_columns(pl.col(score_col).cast(pl.Float64).alias("threshold"))
            .group_by("threshold")
            .agg(pl.col(id_col).cast(pl.Utf8).sort().str.join(", ").alias("_ids"))
        )
        roc_interactive = (
            roc_df.filter(pl.col("threshold").is_not_null())
            .join(id_by_threshold, on="threshold", how="left")
            .select(["sensitivity", "specificity", "threshold", "_ids"])
        )
        id_points = (
            alt.Chart(roc_interactive)
            .mark_point(opacity=0, size=300, filled=True)
            .encode(
                x=alt.X("sensitivity:Q", scale=alt.Scale(domain=[0, 1])),
                y=alt.Y("specificity:Q", scale=alt.Scale(domain=[0, 1])),
                tooltip=[
                    alt.Tooltip("sensitivity:Q", format=".3f"),
                    alt.Tooltip("specificity:Q", format=".3f"),
                    alt.Tooltip("threshold:Q", format=".4g", title="Cutoff"),
                    alt.Tooltip("_ids:N", title="Patient ID(s)"),
                ],
            )
        )
        layers.append(id_points)

    # --- Specificity-level markers ------------------------------------
    if specificity_levels:
        roc_with_thresh = roc_df.filter(pl.col("threshold").is_not_null())

        # Resolve each target level to the closest curve point with
        # specificity >= spec_level (never below the target).
        # Among equally-close points, prefer the highest sensitivity.
        level_data: list[tuple[float, float, float, float]] = []
        for spec_level in specificity_levels:
            candidates = roc_with_thresh.filter(pl.col("specificity") >= spec_level)
            if candidates.is_empty():
                max_spec = float(roc_with_thresh["specificity"].max())  # type: ignore[arg-type]
                raise ValueError(
                    f"No curve point achieves specificity ≥ {spec_level:.3f}. "
                    f"Maximum specificity in data is {max_spec:.3f}."
                )
            row = (
                candidates.with_columns(
                    (pl.col("specificity") - spec_level).alias("_diff")
                )
                .sort(["_diff", "sensitivity"], descending=[False, True])
                .row(0, named=True)
            )
            level_data.append(
                (spec_level, row["sensitivity"], row["specificity"], row["threshold"])
            )

        # Pre-compute pAUC for each level: range [spec_level, 1.0] with McClish.
        pauc_by_target: dict[str, float] = {
            f"{sl:.2f}": _compute_pauc(
                roc_df, sl, 1.0, focus="specificity", mcclish=True
            )
            for sl, *_ in level_data
        }

        # Line segments: 2 rows per segment, grouped by seg_id via detail encoding.
        # All y-coordinates use actual_spec so that lines and marker are consistent.
        # The legend label shows actual_spec only; pAUC is in the tooltip.
        seg_rows: list[dict] = []
        pt_rows: list[dict] = []
        for i, (spec_level, sens, actual_spec, threshold) in enumerate(level_data):
            target = f"{spec_level:.2f}"
            label = f"{actual_spec:.3f}"
            pauc_val = pauc_by_target[target]
            seg_rows += [
                # Horizontal: (0, actual_spec) → (sens, actual_spec)
                {
                    "seg": f"h{i}",
                    "x": 0.0,
                    "y": actual_spec,
                    "level": label,
                    "target": target,
                    "threshold": threshold,
                },
                {
                    "seg": f"h{i}",
                    "x": sens,
                    "y": actual_spec,
                    "level": label,
                    "target": target,
                    "threshold": threshold,
                },
                # Vertical: (sens, actual_spec) → (sens, 0)
                {
                    "seg": f"v{i}",
                    "x": sens,
                    "y": actual_spec,
                    "level": label,
                    "target": target,
                    "threshold": threshold,
                },
                {
                    "seg": f"v{i}",
                    "x": sens,
                    "y": 0.0,
                    "level": label,
                    "target": target,
                    "threshold": threshold,
                },
            ]
            pt_rows.append(
                {
                    "sensitivity": sens,
                    "specificity": actual_spec,
                    "level": label,
                    "target": target,
                    "threshold": threshold,
                    "pauc": pauc_val,
                }
            )

        seg_df = pl.DataFrame(seg_rows)
        pt_df = pl.DataFrame(pt_rows)

        # Build a deterministic green→grey colour scale.
        # Sort by spec_level descending so the highest level gets green (t=0)
        # and the lowest gets grey (t=1).
        sorted_levels = sorted(level_data, key=lambda x: x[0], reverse=True)
        n_lvl = len(sorted_levels)
        color_domain: list[str] = []
        color_range: list[str] = []
        for idx, (_, _, actual_spec, _) in enumerate(sorted_levels):
            t = idx / max(n_lvl - 1, 1)
            color_domain.append(f"{actual_spec:.3f}")
            color_range.append(_lerp_hex(t))
        color_scale = alt.Scale(domain=color_domain, range=color_range)

        # Shade the "high-specificity" region: x∈[0,1], y∈[min_level, 1].
        # Use the green end of the gradient for a subtle filled band.
        shade_y_lo = min(spec_level for spec_level, *_ in level_data)
        shade_df = pl.DataFrame(
            {"x1": [0.0], "x2": [1.0], "y1": [shade_y_lo], "y2": [1.0]}
        )
        shade = (
            alt.Chart(shade_df)
            .mark_rect(color=color_range[0], opacity=0.08)
            .encode(
                x=alt.X("x1:Q", scale=alt.Scale(domain=[0, 1])),
                x2="x2:Q",
                y=alt.Y("y1:Q", scale=alt.Scale(domain=[0, 1])),
                y2="y2:Q",
            )
        )
        # Insert shade before the diagonal so it sits in the background.
        layers.insert(0, shade)

        level_lines = (
            alt.Chart(seg_df)
            .mark_line(strokeDash=[5, 3])
            .encode(
                x=alt.X("x:Q", scale=alt.Scale(domain=[0, 1])),
                y=alt.Y("y:Q", scale=alt.Scale(domain=[0, 1])),
                color=alt.Color("level:N", scale=color_scale, title="Specificity"),
                detail="seg:N",
            )
        )

        level_pts = (
            alt.Chart(pt_df)
            .mark_point(size=80, filled=True)
            .encode(
                x=alt.X("sensitivity:Q", scale=alt.Scale(domain=[0, 1])),
                y=alt.Y("specificity:Q", scale=alt.Scale(domain=[0, 1])),
                color=alt.Color("level:N", scale=color_scale, title="Specificity"),
                tooltip=[
                    alt.Tooltip("target:N", title="Target specificity"),
                    alt.Tooltip("level:N", title="Actual specificity"),
                    alt.Tooltip("sensitivity:Q", format=".3f"),
                    alt.Tooltip("threshold:Q", title="Cutoff", format=".4g"),
                    alt.Tooltip("pauc:Q", title="pAUC (McClish)", format=".3f"),
                ],
            )
        )

        layers += [level_lines, level_pts]

    return (
        alt.layer(*layers)
        .properties(title=chart_title, width=width, height=height)
        .configure_axis(
            gridColor="#444", gridWidth=0.75, gridDash=[3, 3], gridOpacity=0.4
        )
        .configure_view(strokeWidth=0)
    )

plotutils.auc._compute_roc(df, score_col, label_col, reverse_score=False)

AUROC computation with polars backend. Returns a DataFrame with columns (threshold, sensitivity, specificity).

Includes boundary points at (sensitivity=0, specificity=1) and (sensitivity=1, specificity=0) with null threshold.

Ties (multiple samples sharing the same score) are handled correctly: all tied samples are grouped into one threshold step before the cumulative TP/FP counts are computed.

The label_col can be any binary column: integer 0/1, boolean, or a string / categorical column with exactly two distinct values (the lexicographically larger value is treated as the positive class).

Source code in src/plotutils/auc.py
def _compute_roc(
    df: pl.DataFrame,
    score_col: str,
    label_col: str,
    reverse_score: bool = False,
) -> pl.DataFrame:
    """AUROC computation with `polars` backend. Returns a DataFrame with columns (threshold, sensitivity, specificity).

    Includes boundary points at (sensitivity=0, specificity=1) and
    (sensitivity=1, specificity=0) with null threshold.

    Ties (multiple samples sharing the same score) are handled correctly:
    all tied samples are grouped into one threshold step before the cumulative
    TP/FP counts are computed.

    The *label_col* can be any binary column: integer 0/1, boolean, or a
    string / categorical column with exactly two distinct values (the
    lexicographically larger value is treated as the positive class).
    """
    # Normalise label to 0/1 Int8 so downstream comparisons always work.
    label_col_int = "__label__"
    df = df.with_columns(_coerce_label(df[label_col]).alias(label_col_int))

    n_pos = int((df[label_col_int] == 1).sum())
    n_neg = int((df[label_col_int] == 0).sum())

    if n_pos == 0 or n_neg == 0:
        raise ValueError("Both classes (0 and 1) must be present in label_col.")

    curve = (
        df.select([score_col, label_col_int])
        .group_by(score_col)
        .agg(
            (pl.col(label_col_int) == 1).sum().alias("tp_step"),
            (pl.col(label_col_int) == 0).sum().alias("fp_step"),
        )
        .sort(score_col, descending=not reverse_score)
        .with_columns(
            pl.col("tp_step").cum_sum().alias("tp"),
            pl.col("fp_step").cum_sum().alias("fp"),
        )
        .select(
            pl.col(score_col).cast(pl.Float64).alias("threshold"),
            (pl.col("tp") / n_pos).alias("sensitivity"),
            ((n_neg - pl.col("fp")) / n_neg).alias("specificity"),
        )
    )

    boundary = pl.DataFrame(
        {
            "threshold": pl.Series([None, None], dtype=pl.Float64),
            "sensitivity": [0.0, 1.0],
            "specificity": [1.0, 0.0],
        }
    )

    return pl.concat([boundary.head(1), curve, boundary.tail(1)])

plotutils.auc._compute_auc(roc_df)

Trapezoidal AUC under the specificity-sensitivity curve.

Equivalent to the standard AUC of the ROC (see note in the docstring of :func:plot_roc_curve).

Source code in src/plotutils/auc.py
def _compute_auc(roc_df: pl.DataFrame) -> float:
    """Trapezoidal AUC under the specificity-sensitivity curve.

    Equivalent to the standard AUC of the ROC (see note in the docstring
    of :func:`plot_roc_curve`).
    """
    return float(
        roc_df.sort("sensitivity")
        .with_columns(
            [
                pl.col("sensitivity").diff().alias("dx"),
                ((pl.col("specificity") + pl.col("specificity").shift(1)) / 2).alias(
                    "avg_spec"
                ),
            ]
        )
        .filter(pl.col("dx").is_not_null())
        .select((pl.col("dx") * pl.col("avg_spec")).sum())
        .item()
    )