Group Comparison Workflow

How to run a complete group comparison in Licklider, from uploading data to exporting a publication-ready result.

A group comparison asks whether two or more groups differ on some measured outcome. This is among the most common analyses in life sciences research, and Licklider supports it end-to-end: from automatic test selection based on the data to quality checks, disclosure, and export.

This page describes the typical steps involved in a group comparison from start to finish.

This workflow produces concrete outputs: a group comparison figure, the selected statistical test, the p-value and effect-size reporting that belong to that test, the linked quality checks, and the export gates needed for claim-bearing use.


Step 1: Prepare your data

Your dataset should have at minimum:

  • A group column — a categorical variable that identifies which group each observation belongs to
  • A value column — the numeric outcome you want to compare

Each row should represent one observation. If your data is in a wide format (one column per group), reshape it to long format before analysis. For help with reshaping → see Table Shape: Wide vs Long.
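The reshape above can be sketched in plain Python. The column names here are hypothetical placeholders; with a real table, a dataframe library such as pandas does the same job with `DataFrame.melt`.

```python
def wide_to_long(wide_rows, group_columns, group_name="group", value_name="value"):
    """Turn wide rows like {"control": 4.1, "treated": 5.0} into long rows,
    one (group, value) pair per observation, skipping missing cells."""
    long_rows = []
    for row in wide_rows:
        for col in group_columns:
            if row.get(col) is not None:
                long_rows.append({group_name: col, value_name: row[col]})
    return long_rows

wide = [
    {"control": 4.1, "treated": 5.0},
    {"control": 3.8, "treated": 5.4},
]
long_rows = wide_to_long(wide, ["control", "treated"])
# Each long row now carries its own group label and a single numeric value.
```

After reshaping, "control" and "treated" live in the group column rather than in the header, which is the long format the workflow expects.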


Step 2: Request the comparison

Describe what you want to compare in the Chat:

  • "Compare treatment and control groups"
  • "Is there a difference between the three conditions?"
  • "Show a box plot and test whether the groups differ"

Licklider will identify the group and value columns, select a visualization, and choose the appropriate statistical test.

That automation is a starting point, not a guarantee that every design fact has been inferred correctly. Licklider can read the table you upload, but it cannot determine automatically whether the declared group column is the scientifically intended one, whether the observation unit has been encoded at the right level, or whether hidden pairing, nesting, or batch structure exists only outside the table.

Those limits matter because if the group column, value column, or observation unit is wrong, the selected test can look technically plausible while answering the wrong experimental question.


Step 3: Test selection

Licklider selects the statistical test automatically based on the data:

Two groups

  • Normality is checked using the Shapiro-Wilk test
  • If both groups pass: Welch's t-test
  • If either group fails: Mann-Whitney U test
  • If the design is paired: paired t-test or Wilcoxon signed-rank test

Three or more groups

  • If all groups pass normality: one-way ANOVA followed by post hoc comparisons
  • If any group fails: Kruskal-Wallis test

This default path is meant to keep the first result aligned with the broad structure of the data rather than forcing every user to choose a test from scratch. Parametric tests are used when the data looks compatible with their assumptions, and distribution-free alternatives are used when the normality checks suggest that mean-based assumptions may be harder to justify.
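The branching above can be written out as a small decision function. Licklider's actual implementation is not public; this sketch only encodes the documented rules for illustration.

```python
def select_test(n_groups, all_groups_normal, paired=False):
    """Sketch of the documented default test-selection logic:
    normality (per-group Shapiro-Wilk) and pairing drive the branch."""
    if n_groups == 2:
        if paired:
            return "paired t-test" if all_groups_normal else "Wilcoxon signed-rank"
        return "Welch's t-test" if all_groups_normal else "Mann-Whitney U"
    # Three or more groups
    return "one-way ANOVA + post hoc" if all_groups_normal else "Kruskal-Wallis"
```

Note that the function only sees what the table encodes: if `paired` should be true but the pairing was never declared, the independent-samples branch is chosen, which is exactly the failure mode described below.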

The workflow also prefers Welch's t-test for independent two-group comparisons because it remains valid when group variances differ, making it a safer choice than Student's t-test, which assumes equal variances.
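The variance-robust behavior comes from how Welch's statistic is built: the two group variances enter the standard error separately instead of being pooled. A stdlib-only sketch (in practice, `scipy.stats.ttest_ind(a, b, equal_var=False)` does this for you):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples. Variances are not pooled, so unequal
    group variances do not bias the standard error."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    se2 = va / na + vb / nb            # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

When the two groups happen to have equal variances and sizes, the degrees of freedom reduce to the familiar n₁ + n₂ − 2, so Welch's test costs essentially nothing in the equal-variance case.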

If you want to use a specific test, request it directly in the Chat. For more detail → see Choose the Right Test.

If the dataset is missing IDs, uses the wrong ID, collapses technical replicates into apparently independent rows, or leaves pairing structure undeclared, Licklider may still choose the wrong comparison path. The automatic suggestion should be reviewed against the actual study design, not trusted as a substitute for it.


Step 4: Review the assumptions

After the test runs, Licklider evaluates whether the assumptions of the selected test are met. Results appear in the Stats panel of the Inspector.

Key checks that run automatically:

  • Normality — Shapiro-Wilk per group
  • Equal variance — Levene's test for ANOVA
  • Pairing — whether subject IDs are consistent across conditions
  • Pseudoreplication — whether the same biological unit appears more than once

If any check finds a problem, the Inspector will show it as a requirement to address. For more detail → see Assumption and Robustness Guard.
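The pairing and pseudoreplication checks amount to simple bookkeeping over subject identifiers. A sketch of the idea, with hypothetical `subject`/`condition` column names (not Licklider's internal code):

```python
from collections import Counter

def check_design(rows):
    """Two design checks on long-format rows:
    - pseudoreplication: the same subject appears more than once
      within a condition
    - pairing: every condition was measured on the same set of subjects"""
    per_cell = Counter((r["subject"], r["condition"]) for r in rows)
    pseudoreplicated = any(count > 1 for count in per_cell.values())

    subjects_by_condition = {}
    for r in rows:
        subjects_by_condition.setdefault(r["condition"], set()).add(r["subject"])
    condition_sets = list(subjects_by_condition.values())
    fully_paired = all(s == condition_sets[0] for s in condition_sets)

    return {"pseudoreplication": pseudoreplicated, "paired": fully_paired}
```

This also shows why missing or wrong subject IDs defeat the checks: both results depend entirely on the identifiers the table declares.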

These checks are there because a group comparison can look clean while still being scientifically fragile. The workflow surfaces assumption failures and design risks before export so that they remain visible in the figure record instead of disappearing behind a single p-value.


Step 5: Review the figure

The figure appears in the canvas. The default visualization depends on the sample size:

  • Small samples (8 or fewer per group): strip plot
  • Medium samples: box plot with individual points
  • Large samples: violin plot or box plot

This figure switching is intentional. Smaller samples benefit from showing individual points directly, while larger samples benefit from summaries that keep the display readable without hiding the data shape entirely.
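The selection rule can be summarized in a few lines. The page states the small-sample cutoff (8 per group); the medium/large boundary is not documented here, so the `large_cutoff` below is an illustrative assumption.

```python
def default_plot(ns_per_group, small_cutoff=8, large_cutoff=100):
    """Sketch of the documented figure defaults by per-group sample size.
    large_cutoff is an assumed value, not a documented threshold."""
    largest = max(ns_per_group)
    if largest <= small_cutoff:
        return "strip plot"            # small: show every point directly
    if largest < large_cutoff:
        return "box plot with points"  # medium: summary plus raw data
    return "violin plot"               # large: distribution shape stays readable
```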

To change the figure type, ask in the Chat: "Show a violin plot instead."


Step 6: Resolve disclosures

Before a confirmatory result can be exported, Licklider confirms several disclosure items:

  • Error bar type — if the figure has error bars, confirm whether they show SD, SEM, or 95% CI
  • Effect size and CI — confirm that the effect size and confidence interval are reported
  • Sample size — confirm the analysis N

For confirmatory analyses, unresolved disclosures block export. Resolve them in the Inspector.

That export boundary exists because a group comparison is not fully reported by the test name alone. Error-bar meaning, effect size, confidence intervals, and analysis N all affect what the figure actually claims, so Licklider keeps those items explicit before a confirmatory result is exported.


Step 7: Export

Once all disclosures are resolved, the figure can be exported for claim-bearing use. The export includes the figure, the statistical results, the methods text draft, and all required disclosures.


Common variations

Paired design

If your data is paired — for example, measurements before and after treatment on the same subjects — tell Licklider: "This is a paired comparison." Licklider will confirm the pairing structure and use a paired test automatically.
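Declaring the pairing matters because a paired test operates on within-subject differences, not on the two groups as independent samples. A stdlib sketch of the paired t statistic (for real analyses, `scipy.stats.ttest_rel` is the library route):

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic and degrees of freedom, computed on
    within-subject differences. before[i] and after[i] must belong to
    the same subject, which is why pairing has to be declared."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1
```

If the same data were fed to an independent-samples test, the between-subject variability would swamp the within-subject effect, which is the error the pairing confirmation prevents.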

Multiple groups with post hoc

For three or more groups, post hoc pairwise comparisons run automatically after a significant ANOVA result. The correction method (Tukey HSD, Holm, Bonferroni) can be set in the Chat.
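Of the correction methods mentioned, Holm is easy to show in full; it is a step-down refinement of Bonferroni (Tukey HSD works differently, via the studentized range distribution). A sketch of the Holm procedure:

```python
def holm_correction(p_values, alpha=0.05):
    """Holm step-down procedure: sort p-values ascending, compare the
    k-th smallest against alpha / (m - k + 1), and stop rejecting at
    the first comparison that fails. Returns a reject flag per input."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):          # rank 0 is the smallest p-value
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                             # all larger p-values also fail
    return reject
```

Holm controls the familywise error rate at the same level as Bonferroni while rejecting at least as often, which is why it is a common default when Tukey HSD does not apply.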

Non-parametric design

If you prefer a non-parametric test regardless of normality: "Use a non-parametric test."


Design rationale and references

This workflow is designed to make the first pass useful without pretending that workflow automation can replace study-design judgment. That is why Licklider proposes a test, runs the linked checks, surfaces caveats and disclosures, and still leaves room for the user to redirect the method when the analysis plan requires it.

The same reasoning explains why unresolved design and reporting issues can block confirmatory export. The workflow is trying to keep group comparisons both convenient and auditable, not to make them look more certain than the encoded design supports.

References

  1. Shapiro, S. S., & Wilk, M. B. (1965). An analysis of variance test for normality (complete samples). Biometrika, 52(3-4), 591-611. https://doi.org/10.2307/2333709
  2. Welch, B. L. (1947). The generalization of Student's problem when several different population variances are involved. Biometrika, 34(1-2), 28-35. https://doi.org/10.2307/2332510
  3. Hurlbert, S. H. (1984). Pseudoreplication and the design of ecological field experiments. Ecological Monographs, 54(2), 187-211. https://doi.org/10.2307/1942661

These references support key parts of the workflow: normality checking, a variance-robust two-group default, and explicit guarding against pseudoreplication. In Licklider, those ideas are used to structure the workflow, but they do not eliminate the need to verify that the uploaded table matches the real experimental design.


What this page does not cover