Repurposing should be good for patients and capital: human exposure already exists, the biology is often tractable, and in many cases the regulatory path can be clearer. Yet from our discussions with biotech companies and investors, it’s often not considered a scalable business once the upside is stacked against development costs. We don’t buy that inevitability. The real blocker is the cost of conviction: the money and months burned to find, validate, and package a hypothesis that can withstand expert, investor, and regulatory scrutiny. That cost has to be addressed head on.
We built our Development Plan Agent (DPA) to shrink that cost without compromising evidence quality or compliance. The DPA orchestrates specialized AI agents that retrieve, score, and cross-validate literature, preclinical/clinical studies, labels, and safety/PK data. In collaboration with patients and their Patient Advocacy Groups (PAGs), the DPA produces blueprints for N-of-few clinical trials designed to identify early human signals.
This post is solely about the identification of clinical value. A complementary post on the financial valuation of drugs, and on how repurposing can create outsized value in rare diseases, will follow in the near future.
The Cost Stack the DPA Is Attacking
Feasibility of drug repurposing collapses when diligence looks like bespoke research. We built the DPA to focus on four cost buckets and remove the friction between them:
1. Evidence search and triage
2. Mechanistic plausibility
3. Safety and pharmacokinetics (PK)
4. Regulatory precedent and endpoint selection
Now, we aren’t experts at running sites or writing operational protocols (yet!). Those tasks should remain with the relevant experts (or expert systems). But the beauty of our agentic framework is that if such a system exists, it is straightforward to pipe our outputs into it: the DPA’s outputs are decision-grade and handoff-ready, as sketched below.
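To make “handoff-ready” concrete, here is a minimal sketch of what a handoff payload could look like. The field names and values are illustrative assumptions for this post, not a published schema.

```python
import json

# Hypothetical handoff payload: the shape a downstream operational
# system might consume. All names here are placeholders.
handoff = {
    "candidate": "drug_X",
    "indication": "disease_Y",
    "evidence": [
        {"claim": "modulates pathway P", "citations": ["citation_id_1"], "quality": 0.82},
    ],
    "what_must_be_true": ["pathway P drives the phenotype in patients"],
    "suggested_preclinical_checks": ["reporter assay for pathway P"],
}
print(json.dumps(handoff, indent=2))
```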
The Agentic Framework
Our design motif is simple. Every cost bucket in drug repurposing reduces to the same loop: search across the right data, clean it, wire it into an algorithm, validate, and package. We run that loop across evidence search/triage, mechanistic plausibility, safety/PK, and regulatory precedent and endpoint selection. In this sense, we’ve turned a cost problem into a meta-programming problem. The agent motif is constant: a core algorithm, a set of data agents, a validation agent, and an orchestrator. The orchestrator runs tasks in parallel, logs prompts/models/sources, and stitches outputs into a reusable “Opportunity Dossier” with a crisp “what must be true” page.
Figure A: Agent Design Motif.
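As an illustration, here is a minimal sketch of how the four roles could compose. The interfaces (`Finding`, `Dossier`, `run_motif`) are hypothetical simplifications for this post, not our production code.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Finding:
    source: str            # e.g. "literature", "labels", "registries"
    claim: str
    citations: list[str]
    quality: float = 0.0   # critic-assigned score in [0, 1]

@dataclass
class Dossier:
    findings: list[Finding] = field(default_factory=list)
    audit_log: list[dict] = field(default_factory=list)  # prompts/models/sources

    def what_must_be_true(self) -> list[str]:
        # Load-bearing claims not yet backed by high-quality evidence.
        return [f.claim for f in self.findings if f.quality < 0.7]

def run_motif(data_agents: list[Callable[[str], list[Finding]]],
              core_algorithm: Callable[[list[Finding]], list[Finding]],
              validator: Callable[[Finding], float],
              query: str) -> Dossier:
    dossier = Dossier()
    # Orchestrator: fan the data agents out in parallel over the same query.
    with ThreadPoolExecutor() as pool:
        batches = list(pool.map(lambda agent: agent(query), data_agents))
    raw = [finding for batch in batches for finding in batch]
    # Core algorithm: clean the evidence and wire it together.
    wired = core_algorithm(raw)
    # Validation agent: score each finding; log provenance for audit.
    for finding in wired:
        finding.quality = validator(finding)
        dossier.audit_log.append({"source": finding.source, "claim": finding.claim})
    dossier.findings = wired
    return dossier
```

Because every cost bucket instantiates this same loop, standing up a new bucket means writing new data agents and a new core algorithm rather than a new pipeline.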
This capability is not theoretical; many of these components have already been built and tested. For one fibrosis-related disease, the DPA assembles a drug→target→pathway→disease evidence graph. For one candidate, the graph connects calcium-signaling and neuro-immune modulation to downstream cascades (e.g., NF-κB/TGF-β) implicated in fibrotic remodeling. Because the evidence is currently in silico, the dossier lists priority preclinical checks (e.g., TGF-β reporter, fibroblast contraction, and macrophage–myofibroblast co-culture assays) and flags combination hypotheses with the standard of care. The result is a decision-grade Opportunity Dossier whose outputs can be piped into preclinical data efforts before attempting clinical validation.
Figure B: Evidence Search Motif. Data agents normalize literature/labels/registries; the core algorithm is our drug search algorithm; a critic scores quality and contradictions; outputs include a citation table, endpoint precedents, and a “what must be true” page.
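To sketch the graph step, here is what assembling and scoring such an evidence graph could look like, assuming `networkx`. The nodes, edges, and scores are made up for illustration and do not describe the actual fibrosis candidate.

```python
import networkx as nx

# Hypothetical drug -> target -> pathway -> disease evidence graph.
# Every edge carries its citations and a critic-assigned quality score,
# so contradictions stay traceable to their sources.
G = nx.DiGraph()
G.add_edge("drug_X", "calcium channel target", kind="binds",
           citations=["ref_1"], quality=0.8)
G.add_edge("calcium channel target", "NF-kB signaling", kind="modulates",
           citations=["ref_2"], quality=0.6)
G.add_edge("NF-kB signaling", "fibrotic remodeling", kind="implicated_in",
           citations=["ref_3", "ref_4"], quality=0.7)

# Score a mechanistic path as the weakest link in its evidence chain:
# a hypothesis is only as strong as its least-supported edge.
def path_score(graph: nx.DiGraph, path: list[str]) -> float:
    return min(graph.edges[u, v]["quality"] for u, v in zip(path, path[1:]))

for path in nx.all_simple_paths(G, "drug_X", "fibrotic remodeling"):
    print(path, round(path_score(G, path), 2))
```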
For one DEE-related PAG (developmental and epileptic encephalopathy), we evaluated the viability of repurposing candidates through an analysis of mechanistic plausibility. The pipeline normalizes targets, assembles a drug→target→pathway evidence graph, and scores the link to relevant phenotypes. The output is a ranked list of MoA hypotheses, with citations, quality scores, and suggested confirmatory steps, which is used to prioritize which candidates move to investigator-led evaluation. We’ll share a deeper dive into this use case in a separate post.
Figure C: MoA Analysis Motif. Same four columns; a proprietary dataset enriches the evidence graph; customized AI generates and validates mechanistic hypotheses; outputs are ranked MoAs.
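As a sketch of the ranking step, here is one way the prioritization could work. The hypothesis structure, scores, and tie-breaking by citation count are illustrative assumptions, not the DPA’s actual scoring.

```python
from dataclasses import dataclass

@dataclass
class MoaHypothesis:
    drug: str
    path: list[str]          # drug -> target -> pathway -> phenotype
    citations: list[str]
    score: float             # aggregate evidence quality in [0, 1]
    next_steps: list[str]    # suggested confirmatory experiments

def rank_hypotheses(hypotheses: list[MoaHypothesis]) -> list[MoaHypothesis]:
    # Rank by evidence score; break ties on citation count.
    return sorted(hypotheses, key=lambda h: (h.score, len(h.citations)),
                  reverse=True)

# Placeholder candidates: names and numbers are invented for illustration.
candidates = [
    MoaHypothesis("drug_A", ["drug_A", "target_1", "pathway_1", "seizure phenotype"],
                  ["ref_1", "ref_2"], 0.74,
                  ["in vitro target engagement assay"]),
    MoaHypothesis("drug_B", ["drug_B", "target_2", "pathway_2", "seizure phenotype"],
                  ["ref_3"], 0.41,
                  ["replicate pathway link in a disease model"]),
]
for h in rank_hypotheses(candidates):
    print(h.drug, h.score, "->", h.next_steps[0])
```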
In the pharma industry, evidence is everything. Drugs, whether new or repurposed, get approved only after submitting solid clinical evidence on safety and efficacy. But the standard way of collecting that evidence is inefficient, especially for the complex challenge of repurposing existing drugs. The fix lies in scaling up and building flexibility into the evaluation process. Relying heavily on manual human effort won't work. Instead, we need AI-driven systems with human oversight to make drug repurposing viable for investment. That's exactly what we're building.