
The Daubert self-audit checklist for forensic evaluation reports.

A pre-finalization audit you can run on any forensic report before you sign it. Calibrated to Daubert (1993), Joiner (1997), Kumho Tire (1999), and the December 2023 amendment to Federal Rule of Evidence 702. Use it as a final pass on your own work or as a benchmark when reviewing colleagues’ reports.

What Daubert actually asks of forensic testimony.

Federal Rule of Evidence 702 governs the admissibility of expert testimony. The Daubert trilogy — Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993); General Electric Co. v. Joiner, 522 U.S. 136 (1997); and Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999) — established the federal standard and extended it to all expert testimony, including forensic mental health.

The December 1, 2023 amendment to FRE 702 sharpened the burden: the proponent of expert testimony must demonstrate to the court that it is “more likely than not” that the testimony meets each admissibility requirement. The amendment was driven by years of trial-court drift in which reliability questions were treated as weight-of-evidence issues for the jury. After the amendment, those questions must be resolved by the court before the jury hears anything.

For forensic psychology specifically, the practical implication is that methodology, instrument selection, application, and the chain of inference from data to opinion are all under tighter scrutiny than they were before. A report that would have survived a 2015 admissibility challenge may not survive a 2026 one.

Six sections. Thirty-seven questions.

Run each section against your draft. Any item you can’t answer affirmatively is a place where the report is exposed under cross-examination or under a Daubert motion. The goal is not perfection — it is awareness, before you sign.

1. Methodology — the testability and peer-review prong

Daubert factors: whether the theory or technique can be (and has been) tested; whether it has been subjected to peer review and publication; the known or potential rate of error; and general acceptance.

  • Is the methodology I used published in the peer-reviewed forensic literature?
  • Have I cited the literature that establishes the methodology, not just authored my own reasoning?
  • Have I addressed the known error rate for the method or the instrument?
  • Have I documented why this method was appropriate for this evaluation type and this referral question?
  • Is the methodology generally accepted in forensic psychology — or, if it is novel, have I justified the choice with reference to validation studies?
  • Have I avoided combining methods in ways the literature does not support?
  • If I deviated from a standard protocol, is the deviation documented and justified?

2. Instruments — the right tool for the question

Forensic-specific consideration: clinical-use validation does not transfer automatically to forensic-use validation.

  • Is each instrument validated for the population and the forensic context I’m using it in?
  • Have I used current published norms appropriate to the examinee’s demographics?
  • Did I administer the instrument under standardized conditions, and is any deviation documented?
  • Have I included response-style assessment / symptom validity tests (SVTs) where the forensic context calls for them?
  • Have I avoided instruments with known forensic admissibility problems unless I’ve specifically justified their inclusion?
  • Does the report show the integration of psychometric data with clinical reasoning — rather than treating the score as the opinion?

3. Application — sound application to this case

FRE 702(d) (post-2023): the expert’s opinion must reflect a reliable application of the principles and methods to the facts of the case.

  • Have I shown the inferential chain from data to opinion rather than implying it?
  • Have I tested alternative hypotheses and documented their consideration?
  • Have I addressed each allegation, claim, or referral question with a method specifically suited to it — rather than relying on a global clinical impression?
  • Where the analysis depends on collateral data, have I triangulated rather than anchored on self-report?
  • Have I considered base rates where they affect the interpretation of the data?
  • Have I addressed the cultural, linguistic, and socioeconomic context of the examinee?
  • Have I distinguished between observed behavior and inferred internal states, with appropriate hedging?

4. Legal standard alignment — speaking to the right question

The opinion must answer the legal question the court is asking, in the language the controlling jurisdiction uses.

  • Have I identified the controlling legal standard for the jurisdiction (Daubert, Frye, hybrid)?
  • Does the analysis map to the specific elements of the applicable test?
  • Have I avoided opining on the ultimate legal question where FRE 704(b) or its state analog prohibits doing so?
  • Have I distinguished between clinical observation and forensic conclusion?
  • Have I scoped the opinion to the referral question rather than expanding into territory I wasn’t retained to cover?

5. Report craft — how it reads on the stand

Cross-examination rarely attacks the science. It attacks the language.

  • Have I used probabilistic language where the data warrant probabilistic conclusions, rather than definitive statements?
  • Have I avoided ipse dixit — conclusions presented without supporting data?
  • Are clinical terms defined or footnoted for a non-clinical reader?
  • Have I documented the limitations of the evaluation (timing, scope, available data, inferential reach)?
  • Have I avoided pejorative language and framing that reads as advocacy rather than analysis?
  • Is every factual assertion in the report sourced — record, interview, observation, instrument?

6. Role and ethics — the threshold issues

Reports get excluded on threshold issues before the substance is reached.

  • Was informed consent or notice of purpose given and documented (per the APA Specialty Guidelines for Forensic Psychology)?
  • Have I avoided multi-role conflicts (treating clinician, evaluator, mediator) within the same case?
  • Are my qualifications, scope of practice, and any limitations on competence accurate as represented in the report?
  • Have I addressed any potential conflicts of interest?
  • If I’m using AI tools, structured templates, or consulting resources, are they disclosed at the level the jurisdiction expects?
  • Are records of the evaluation maintained per applicable ethics, retention, and HIPAA / state law standards?

This is what ForensicShield runs on every report.

The checklist above is what an experienced forensic psychologist would walk through before signing a report. The same logic underlies ForensicShield’s defensibility review — a structured pre-finalization analysis calibrated to the published forensic literature, FRE 702 (as amended), and the cross-examination patterns that have produced exclusions in published cases. The output is a Court Preparation Packet that flags items the report should address before you sign it.

The forensic psychologist remains the author, the expert, and the signatory on every report. The AI doesn’t generate opinions, doesn’t alter conclusions, and doesn’t replace clinical judgment. It runs the same kind of audit this checklist describes — faster, more thoroughly, and on every report rather than on the ones you have time to scrutinize.

See it applied to specific evaluation types: competency to stand trial, child custody, and criminal responsibility.

Authority behind the checklist.

  • Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) — the foundational reliability factors.
  • General Electric Co. v. Joiner, 522 U.S. 136 (1997) — the trial court’s gatekeeping role and abuse-of-discretion review.
  • Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999) — extended Daubert to all expert testimony, including non-scientific.
  • Federal Rule of Evidence 702, as amended December 1, 2023 — clarified the proponent’s burden and tightened the reliability gate.
  • APA Specialty Guidelines for Forensic Psychology (2013) — the discipline’s ethical and methodological expectations.

This checklist is informational and does not constitute legal advice. State law and circuit law vary. For Frye-state evaluations, the “general acceptance” prong controls in place of the broader Daubert reliability inquiry.

Run this audit on every report, automatically.

ForensicShield runs the same kind of defensibility review described above on any forensic report you upload. Your first analysis is free during the 14-day trial.

Start Free Trial →

14-day free trial · 2 reports included (1 sample + 1 of your own) · A payment method is collected for identity verification — your card will not be automatically charged when the trial ends · HIPAA compliant