We help capital deployers and benchmark teams keep a live, defensible record of what's happening across the portfolio. The edge cases surfaced. The unsanitized data caught. The theory of change tested against what the evidence actually says.
Some engagements run end-to-end with our team; others run alongside your analysts, with us configuring the methodology and standing behind the evidence. Either way, the deliverable carries its sources with it.
A multi-entity index built end-to-end: rubric, scoring, source ingestion, analyst review, published report. Built once, re-run every cycle, with the prior cycle as live precedent.
Example engagement: FAIRR, Coller Foundation Initiative. 768-question protein-producer index, multi-source ingestion, citation-linked extractions.
Re-assess companies, programs, or grantees against your methodology. We keep a living trajectory per entity, so the question “what changed since last cycle?” has a sourced answer. Lightweight surveys collect the few fields only the recipient can answer — everything else is pulled from filings and prior cycles, so the reporting load on grantees and portfolio companies stays small.
Example engagement: TriLinc Global. Cohort assessment plus a private record the deal team queries mid-conversation, retained across cycles.
A citation-linked report that builds a trajectory of impact over time, drawing on archived web filings, application data, and field notes so every claim has a source the reader can open.
Example engagement: Santa Clara · Miller Center for Global Impact. Thirty years of program documentation structured into a queryable corpus, with citation-linked reporting on cohort outcomes over time.
Whether it’s a single cohort or an ongoing portfolio, the work moves through the same four steps. The deliverables change. The sources stay attached.
We adopt your existing methodology, or build one with you. Every question, every score, every edge case, written down.
Does the enterprise’s primary activity directly produce the intended outcome?
Every filing, survey, board minute, and field note is attached to the holding it pertains to, so any future answer can cite back to it.
A cited answer is drafted. A named analyst reviews, contests, and locks each one. The trail is what gets shipped.
How many solar lights were sold in the East Africa region in FY25?
218,400 units sold across Kenya, Uganda, and Tanzania — from the distributor sales export¹. Tanzania figure includes a Q4 returns adjustment confirmed in the Mar 14 field review². Flag: Q4 financials report 204,100³ — gap surfaced for the analyst to resolve.
Anveo’s “Regional” tab excludes Tanzania returns. They mis-tag the returns column as regional when it’s actually post-cycle — this is an Anveo-specific quirk, going back to FY23.
Logged as edge case for Anveo Energy: “Regional” tab excludes post-cycle returns. Will apply to future Anveo Energy questions that cite Q4_financials.xlsx.
Most of our buyers have already tried ChatGPT on a sustainability report or a 10-K. The answer looked plausible but couldn't be defended to an IC. Here's the same question, asked once and answered the way an external reviewer would expect.
Have we already scored this company, and what’s changed since the last cycle?
Engagements connect to Claude and ChatGPT directly, so your team can interrogate the cycle in the chat they already use. The question asked at 9am runs against the same record an external reviewer can audit at 5pm.
Coller Foundation Initiative
Seafood Traceability Index
Global research partnership
Internal-team enablement
The work shows up differently depending on the buyer, because the two workflows are different enough that a one-size-fits-all service would serve neither well.
You think in cohorts, methodologies, and published benchmarks. We configure the index, run the analysts, and ship the findings. Sources stay attached to every claim, inside the deliverable.
You think in deployment decisions and ongoing monitoring. We give you a living trajectory per entity, sourced, queryable, on demand, and a clean record to show the people you report to.
A defensible cycle isn’t an end in itself. It’s the substrate that lets the work on top of it actually move — programs get sharper, capital gets allocated with more conviction, and the companies and grantees being assessed get clearer signal on what to improve.
A cited, contestable record means member organizations engage with specifics, not headlines. The companies you assess know exactly where they fall short and what evidence supports it — the only condition under which a benchmark actually shifts behavior.
A living trajectory per holding means you’re not re-discovering the portfolio every quarter. You spot the company drifting from thesis before the next board meeting, not after. And the holdings themselves get a clearer mirror on where they stand — often the most useful thing you can give them.
Most of the friction in impact and sustainable finance lives in the gap between what gets claimed and what the evidence can support. Closing it, one citation at a time, is how the field gets to a place where a portfolio review, a benchmark, or a theory of change can actually be trusted.
Engagements are flexible. If a pilot or proof-of-concept makes sense first, we'll run one without formalizing the engagement. Once it's clear we can deliver, we find a structure that works.