Reachability Labs: measuring reachable future under commitment

Diagnostic engagements.

Reachability Labs accepts a small number of diagnostic engagements per quarter. Read the fit criteria. If your situation matches, submit the intake.

Engagement gate.

What we accept. What we decline.

Read these before submitting the intake.

We accept engagements where

  • You have a real process producing decisions or outcomes step by step — planner, solver, workflow, reasoning system, forward-construction pipeline.
  • It's failing in a way ordinary metrics don't explain.
  • A senior person on your side owns the question.
  • You can share real operational detail — traces, examples, representative cases.
  • The work yields a written measurement deliverable.

We decline engagements where

  • You want a finished software product. We do diagnostics, not builds.
  • No senior person owns the question.
  • You can't share enough detail to do real measurement.
  • The real problem is organizational, strategic, or vendor selection.
  • You want a validation memo for a decision already made.
  • The work has no measurement yield for the broader program.

Different lane?

Research collaboration, domain adapters, and technical instrumentation work all go through the collaboration lane. The evidence hub has the receipts if you want to verify the research first.

Need the research proof before you scope a diagnostic?

Use the evidence hub if you need to verify the flagship benchmark, graph-coloring transfer, public archive, or artifact ledger before sharing a process trace.

A decision-grade diagnostic, not a generic postmortem.

The point is to identify where your process loses the route, what kind of trap dominates, and which changes are most likely to matter.

01

Boundary and regime

Where the process starts to lose reachable future, and whether the failure mode is shallow, deep, front-loaded, late, or mixed.

02

Failure fingerprint

The pattern of collapse: hazard concentration, trap geometry, receipt behavior, and what the surface metrics are hiding.

03

Variant comparison

If stronger variants exist, the work separates what they actually buy from what only looks better on aggregate scores.

04

Next-step recommendation

A concrete decision path: where to instrument next, what to stop assuming, and which upgrades are worth testing first.

Four documents. Four jobs.

Each document answers a different question: what you get, what the output looks like, how to circulate it internally, and how to start.

Best first read

What You Receive

The cleanest overview of the engagement. Read this first if you need the shortest answer to what you get and why it is useful.

Use this when you want a fast decision on whether the service matches the problem you are dealing with.

Proof artifact

Illustrative Findings Memo

A sample of the actual output. This is the best document for seeing how the analysis is framed, what counts as evidence, and how recommendations are delivered.

Use this when you want to see the format and standard of the work rather than just a service description.

Shareable overview

Pilot One-Pager

A one-page summary for internal circulation. Useful when you need to hand the idea to another decision-maker without sending the full packet.

Use this when you need a concise document for a partner, sponsor, manager, or collaborator.

Start here

Structured intake form

Web-based. The questions cover process, failure pattern, constraints, and success conditions. Submitting it is how we evaluate fit.

Use after reading the fit criteria.

Simple sequence. No mystery.

The path is structured so you can see what is happening and why each step exists.

Step 1

Intake

You provide the process, the success condition, the operating constraints, and the failure symptoms that matter.

Step 2

Review and instrumentation

The process is framed as a constructive system so the right diagnostics can be applied instead of generic benchmarking.

Step 3

Findings

You receive a receipt-backed findings memo showing where the route closes, how the failure manifests, and what is actually structural.

Step 4

Decision

The output supports a concrete next move: instrument further, change the process, compare variants, or define a pilot software path.

Define the target. Instrument. Compare. Diagnose.

How a diagnostic works.

Beneath the customer workflow above, the measurement method is four moves. Each step has a specific diagnostic purpose and a specific kind of output.

Step 1
Define success
What counts as a valid outcome for this process?

Step 2
Instrument
Capture trajectories, decisions, and failure receipts.

Step 3
Compare
Baseline vs. stronger process, or multiple process families.

Step 4
Diagnose
Identify where the path closes and what kind of trap dominates.
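
The four moves can be read as a minimal instrumentation loop. The sketch below is illustrative only: the process, the receipt schema, and the "options remaining" proxy for reachable future are hypothetical stand-ins, not Reachability Labs' actual tooling.

```python
def instrument(process, instance):
    """Moves 1-2: run the process on an instance, recording a receipt per step.
    Each receipt notes the decision taken and a proxy for options remaining."""
    receipts = []
    for step, (decision, options_left) in enumerate(process(instance)):
        receipts.append({"step": step, "decision": decision, "options_left": options_left})
    return receipts

def diagnose(receipts, horizon):
    """Move 4: find where the route closes and classify the regime."""
    closure = next((r["step"] for r in receipts if r["options_left"] == 0), None)
    if closure is None:
        return {"closure_step": None, "regime": "open"}
    regime = "front-loaded" if closure < horizon / 2 else "late"
    return {"closure_step": closure, "regime": regime}

# Toy process: greedily spends its options, closing the route early.
def toy_process(n):
    options = 3
    for _ in range(n):
        options = max(options - 1, 0)
        yield ("pick", options)

receipts = instrument(toy_process, 6)
print(diagnose(receipts, horizon=6))  # route closes at step 2: front-loaded
```

Move 3, the comparison, comes from running the same loop over several process variants and putting the diagnoses side by side.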

Convergent across variants

Landscape-side · structural

Results that give the same answer regardless of which process you use are candidates for structural claims about the problem itself. These are features of the constraint geometry, not your process.

Changes across variants

Process-side · procedural

Results that shift when you change the process are tied to the interaction between your method and the problem. The diagnostic tells you which failures are structural and which your process can actually fix.
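
One hedged way to operationalize the split: run several process variants on the same instances and keep only the failures that survive every variant as structural candidates. The variant names and failure identifiers below are hypothetical.

```python
def classify_failures(failures_by_variant):
    """Failures present under every variant are candidates for structural
    (landscape-side) claims; failures that disappear under some variant
    are procedural (process-side)."""
    variants = list(failures_by_variant.values())
    convergent = set.intersection(*variants)       # same answer under every variant
    divergent = set.union(*variants) - convergent  # shifts when the process changes
    return convergent, divergent

# Hypothetical instance ids that each variant failed on.
structural, procedural = classify_failures({
    "baseline":  {"case-3", "case-7", "case-9"},
    "stronger":  {"case-3", "case-9"},
    "alternate": {"case-3", "case-5", "case-9"},
})
print(sorted(structural))  # cases no variant can fix
print(sorted(procedural))  # cases some variant avoids
```

Only the convergent set supports claims about the problem's constraint geometry; everything else is a property of the method.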

Submit the intake.

The intake

Open the structured intake.

You don't need to translate your problem into our vocabulary. If you only know the failure pattern and the decision that depends on it, start there.

Or email

Email if a conversation makes more sense.

If your situation is unusual or you need to discuss confidentiality first, email directly.