Know your portfolio.
Prove your theory of change.

We help capital deployers and benchmark teams keep a live, defensible record of what's happening across the portfolio. The edge cases surfaced. The unsanitized data caught. The theory of change tested against what the evidence actually says.

What stands behind the answer
Cited sources
Every metric ties to a page, paragraph, or document. Public filings, private data rooms, field reports.
Named reviewers
A person on your team or ours signs each claim. Not a chatbot, a human on the line.
Locked cycles
Each cycle’s answer is fixed and comparable to the last. “What changed” has a real reference point.
01 · What we deliver

Work your reviewers can defend.

Some engagements run end-to-end with our team; others run alongside your analysts, with us configuring the methodology and standing behind the evidence. Either way, the deliverable carries its sources with it.

01 Benchmarks & cohort indices

A multi-entity index built end-to-end: rubric, scoring, source ingestion, analyst review, published report. Built once, re-run every cycle, with the prior cycle as live precedent.

Example engagement · FAIRR, Coller Foundation Initiative. 768-question protein-producer index, multi-source ingestion, citation-linked extractions.

  • Industry benchmarks
  • Standard-setter cohorts
  • Custom indices
02 Diligence & portfolio review

Re-assess companies, programs, or grantees against your methodology. We keep a living trajectory per entity, so the question “what changed since last cycle?” has a sourced answer. Lightweight surveys collect the few fields only the recipient can answer — everything else is pulled from filings and prior cycles, so the reporting load on grantees and portfolio companies stays small.

Example engagement · TriLinc Global. Cohort assessment plus a private record the deal team queries mid-conversation, retained across cycles.

  • Pre-investment diligence
  • Portfolio monitoring
  • Program evaluation
  • Recipient surveys
03 Citation-linked reporting

A citation-linked report that builds an over-time trajectory of impact — leveraging archived web filings, application data, and field notes so every claim has a source the reader can open.

Example engagement · Santa Clara · Miller Center for Global Impact. Thirty years of program documentation structured into a queryable corpus, with citation-linked reporting on cohort outcomes over time.

  • Annual reporting
  • Quarterly trajectory
  • Audit-ready
02 · How an engagement runs

Four steps. The same standard of evidence underneath.

Whether it’s a single cohort or an ongoing portfolio, the work moves through the same four steps. The deliverables change. The sources stay attached.

Step 01 · Weeks 01–02

Configure the rubric.

We adopt your existing methodology, or build one with you. Every question, every score, every edge case, written down.

Example rubric question, configured with your team
  • Who · Beneficiary
  • How much · Scale
  • Contribution
  • Risk
  • Team
  • Thesis fit
Question 1 of 12

Does the enterprise’s primary activity directly produce the intended outcome?

  • 1 · Outcome unrelated
  • 2 · Related, requires extras
  • 3 · Reliable, dependent on use
  • 4 · Verifiable at delivery
Step 02 · Rolling, per cycle

Ingest the evidence.

Every filing, survey, board minute, and field note is attached to the holding it pertains to, so any future answer can cite back to it.

Each source kept · provenance preserved
10-K filing · p. 14–26
Sustainability report · PDF
Portco survey · CSV
Board minutes · Q3 · 4 docs
Field notes · site visit
KPI pack · scanned
Sources attached across 11 companies
Step 03 · Per-cycle body of work

Run, review, contest.

A cited answer is drafted. A named analyst reviews, contests, and locks each one. The trail is what gets shipped.

Question MRV-0142 · Anveo Energy · FY25
Q142 · Distribution · units sold · Autofilled

How many units of solar lights were sold in the East Africa region in FY25?

Answer

218,400 units sold across Kenya, Uganda, and Tanzania — from the distributor sales export [1]. Tanzania figure includes a Q4 returns adjustment confirmed on the Mar 14 field review [2]. Flag: Q4 financials report 204,100 [3] — gap surfaced for the analyst to resolve.

Substantiation
  • [1] Source data · CSV · distributor_sales_FY25.csv · rows 1142–1389 · 218,400
  • [2] Meeting transcript · Mar 14 field review · clarification from M. Owino · 18:42
  • [3] Contradiction · XLSX · Q4_financials.xlsx · tab “Regional”, cell D17 · 204,100
Comment thread on Q4_financials.xlsx · 2 replies
  1. J. Park · Tue 11:02

     Anveo’s “Regional” tab excludes Tanzania returns. They mis-tag the returns column as regional when it’s actually post-cycle — this is an Anveo-specific quirk, going back to FY23.

  2. Merivia · Tue 11:04

     Logged as edge case for Anveo Energy: “Regional” tab excludes post-cycle returns. Will apply to future Anveo Energy questions that cite Q4_financials.xlsx.

03 · The corpus behind the answer

Your methodology, prior cycles, and sources, held in one place so the answer can be defended.

Most of our buyers have already tried ChatGPT on a sustainability report or a 10-K. The answer looked plausible and couldn’t be defended to an IC. Here’s the same question, asked once and answered the way an external reviewer would expect.

An associate asks

Have we already scored this company, and what’s changed since the last cycle?

The question · From your team

Prior ruling · Your team scored this last cycle
  • Last cycle answer · locked Mar 2025
  • Reviewer · L. Stayton
  • Rubric v3 · unchanged

What changed · New filings since last cycle
  • FY25 10-K · filed 14 Feb
  • Sustainability report · new
  • Q3 questionnaire · returned
  • Site visit · Jan field note

Disagreement · Two filings, two numbers
  • 10-K · 62%
  • Questionnaire · 84%
  • Flag raised · awaiting clarification

Reviewer · A named analyst signs each claim

The answer · Cited, comparable, defensible
A question lands. Watch how the work routes it.
04 · In your existing chat

Query the work from Claude or ChatGPT between cycles. Get sourced answers back.

Engagements connect to Claude and ChatGPT directly, so your team can interrogate the cycle in the chat they already use. The question your team asks at 9am is the same record an external reviewer can audit at 5pm.

You · in Claude

 

Merivia · cited
  1. FY25 cohort · outstanding surveys   Cohort tracker · L. Stayton
  2. Anveo · replies in the last 7 days   Edge-case ruling #07
  3. Anveo · locked answers from last cycle   Last cycle · locked
Answer · in Claude

 

logged · sourced · replayable
05 · Selected work

Engagements where our team and yours did the work together.

06 · Who we work with

Two motions. The same standard of evidence.

The work shows up differently depending on the buyer, because the workflows are different enough that a one-size service would serve neither well.

ICP I · Research & benchmarking

NGOs, investor coalitions, standard-setters, foundations.

You think in cohorts, methodologies, and published benchmarks. We configure the index, run the analysts, and ship the findings. Sources stay attached to every claim, inside the deliverable.

  • One framework cycle as a fixed-fee engagement
  • Multi-framework cohorts, longitudinal trajectories
  • Findings + clarifications as the standard deliverable
ICP II · Capital deployers & diligence teams

Investors, family offices, DFIs, programmatic funders.

You think in deployment decisions and ongoing monitoring. We give you a living trajectory per entity, sourced, queryable, on demand, and a clean record to show the people you report to.

  • Diligence and monitoring on one record
  • Allocate with conviction, report faster
  • Replace quarterly questionnaires with a live record
07 · Why this matters

The record is the means. The change is the point.

A defensible cycle isn’t an end in itself. It’s the substrate that lets the work on top of it actually move — programs get sharper, capital gets allocated with more conviction, and the companies and grantees being assessed get clearer signal on what to improve.

For research & benchmark teams

Findings that carry weight beyond the report cycle.

A cited, contestable record means member organizations engage with specifics, not headlines. The companies you assess know exactly where they fall short and what evidence supports it — the only condition under which a benchmark actually shifts behavior.

For capital deployers

Allocate against evidence, not against the last questionnaire.

A living trajectory per holding means you’re not re-discovering the portfolio every quarter. You spot the company drifting from thesis before the next board meeting, not after. And the holdings themselves get a clearer mirror on where they stand — often the most useful thing you can give them.

For the field

Closing the gap between what gets reported and what actually happened.

Most of the friction in impact and sustainable finance lives in that gap. Closing it, one citation at a time, is how the field gets to a place where a portfolio review, a benchmark, or a theory of change can actually be trusted.

Put the work on a defensible footing

We'll jump on a call,
scope a first cycle,
and get to work.

Engagements are flexible. If a pilot or proof-of-concept makes sense first, we'll do that without formalizing. Once it's clear we can deliver, we find an engagement structure that works.

Book a call