How It Works
A conversational front end with a much more structured system underneath.
TrialFinder is meant to feel easy to start, especially for people who do not have a polished medical summary. Under the surface, the product is designed to behave more like a controlled intake and search-building workflow: records first, then disease-aware follow-up, then explicit filters, registry retrieval, plain-language result summarization, and finally human verification.
Documents before more questions
If records are uploaded, the system reviews them first so the user is not asked to repeat details already present in pathology, labs, visit notes, or imaging summaries.
LLMs inside a constrained workflow
The conversational layer uses Anthropic models, but the output is forced back into structured draft fields instead of being treated as a free-form medical answer.
The search stays inspectable
The user sees an editable search state. The final output is not a hidden prompt; it is a transparent trial-search draft that can still be changed on the search page.
Architecture
Four layers keep the experience conversational but controlled.
The system does not rely on one generic chatbot prompt. It operates through distinct layers that keep the workflow grounded, inspectable, and easier to reason about.
Intake conversation layer
A patient or caregiver can start in plain language instead of needing a perfect medical form on the first turn.
- Free-text messages are accepted alongside uploaded records.
- The assistant keeps the interaction to one high-value question at a time.
- The tone stays patient-friendly while the content aims to be clinically useful.
Disease-state routing layer
As soon as enough evidence exists, the intake shifts into a disease-appropriate question path.
- The current routing supports oncology, diabetes, cardiology, neurology, autoimmune, and a general fallback.
- Each track changes the follow-up fields and next-best questions.
- The system can stay general longer when the diagnosis remains vague.
Deterministic draft layer
Every turn updates explicit search fields that can later be audited, edited, and reused on the search page.
- Condition, age, sex, location, radius, phase, status, sponsor, medications, and exclusions are tracked explicitly.
- Document findings and structured facts are preserved separately.
- Default recruiting status remains visible unless the user changes it.
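The explicit fields listed above can be pictured as one flat structure. This is a minimal sketch only; the field names mirror this list, but the types, defaults, and the `SearchDraft` name are assumptions, not the product's actual schema.

```python
# Illustrative sketch of the deterministic search draft: every tracked
# field is explicit, auditable, and editable. Types/defaults are assumed.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SearchDraft:
    condition: Optional[str] = None
    age: Optional[int] = None
    sex: Optional[str] = None
    location: Optional[str] = None
    radius_miles: Optional[int] = None
    phase: Optional[str] = None
    status: str = "RECRUITING"          # default stays visible until the user changes it
    sponsor: Optional[str] = None
    medications: List[str] = field(default_factory=list)
    exclusions: List[str] = field(default_factory=list)
    document_findings: List[str] = field(default_factory=list)  # kept separate from chat facts
```

Keeping the draft as a plain structure rather than chat history is what makes it auditable and reusable on the search page.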
Registry retrieval layer
ClinicalTrials.gov remains the source of record: the actual trial list comes from the public registry, not from a purely generative answer.
- Long patient notes can be normalized into a more compact search phrase.
- Results are retrieved from the public registry and filtered again locally where preview data is available.
- The first result set is summarized in plain language so the user can keep refining the search in chat on the results page.
- The system never treats initial fit as final eligibility.
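Turning a draft into a registry query can be sketched roughly as below. The parameter names follow the public ClinicalTrials.gov API v2 conventions (`query.cond`, `filter.overallStatus`, `query.locn` against `/api/v2/studies`), but treat this mapping as an assumption for illustration, not the product's real query builder.

```python
# Hedged sketch: map a sparse search draft onto ClinicalTrials.gov API v2
# query parameters. Only fields the draft actually contains are included.
def build_query_params(draft: dict) -> dict:
    params = {"pageSize": 25}  # assumed page size
    if draft.get("condition"):
        params["query.cond"] = draft["condition"]
    if draft.get("status"):
        params["filter.overallStatus"] = draft["status"]
    if draft.get("location"):
        params["query.locn"] = draft["location"]
    return params
```

A sparse draft simply yields a broader query, which is why later refinement on the search page matters.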
System Journey
From the first message to an editable shortlist.
Each stage below shows what the user experiences, what the system is locking in behind the scenes, and where the process can still go wrong.
Step 01
The case starts in natural language
The intake is intentionally chat-first because most people do not arrive with a clean medical summary or a complete list of trial-search facts.
What The User Feels
- The user can begin with a diagnosis, symptoms, a short note, or uploaded records.
- They do not need to know medical terminology perfectly to get started.
- The chat feels conversational instead of like a long intake form.
What The System Locks In
- The turn is validated so it contains text, files, or both.
- Existing state is preserved so later questions can build on earlier answers.
- The system starts tracking a structured search draft immediately, even if it is still sparse.
Where It Can Break
- A symptom-only opening can delay precise routing.
- No message and no file means the intake cannot proceed.
- A broad opening may produce a broad first draft.
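The validation rule for the opening turn is simple enough to sketch directly: a turn is accepted when it carries text, files, or both, and rejected when it carries neither. The `IntakeTurn` shape here is a hypothetical stand-in, not the actual data model.

```python
# Minimal sketch of Step 01's turn validation, under assumed field names.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IntakeTurn:
    text: str = ""
    files: List[str] = field(default_factory=list)  # uploaded file names

def validate_turn(turn: IntakeTurn) -> bool:
    """The intake cannot proceed with no message and no file."""
    return bool(turn.text.strip()) or bool(turn.files)
```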
Step 02
Records are processed before more questioning
If the user uploads documents, the application is designed to review those first so clinically useful facts can be extracted before asking the next question.
What The User Feels
- The assistant acknowledges that records are being reviewed.
- The conversation should skip questions that strong records already answer.
- Extracted record findings begin appearing in the intake summary.
What The System Locks In
- PDFs, text files, structured documents, and common images are prepared for local extraction and model review.
- Attachment summaries are stored for each file.
- Document findings are written into the search draft for later use.
Where It Can Break
- Scanned or blurry records may be weakly readable.
- Partial or conflicting documents should trigger clarification instead of certainty.
- Unsupported or oversized files are skipped conservatively.
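Conservative attachment screening, as described above, amounts to an allow-list plus a size cap. The extension set and the 20 MB limit below are assumptions chosen for illustration; the real limits may differ.

```python
# Sketch of conservative file screening: anything unsupported or oversized
# is skipped rather than guessed at. Extension set and cap are assumed.
import os

SUPPORTED_EXTENSIONS = {".pdf", ".txt", ".csv", ".png", ".jpg", ".jpeg"}
MAX_BYTES = 20 * 1024 * 1024  # assumed 20 MB cap

def screen_attachment(filename: str, size_bytes: int) -> bool:
    """Return True only when the file is safe to hand to extraction."""
    ext = os.path.splitext(filename)[1].lower()
    return ext in SUPPORTED_EXTENSIONS and 0 < size_bytes <= MAX_BYTES
```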
Step 03
The system selects a disease track
Once the disease family becomes identifiable, the intake stops behaving generically and begins asking more specialty-appropriate questions.
What The User Feels
- Cancer cases quickly move toward stage, subtype, biomarkers, and prior therapy.
- Diabetes cases move toward type, control, meds, and complications.
- Other conditions follow their own more relevant field sets.
What The System Locks In
- The router evaluates the message, draft, and uploaded evidence together.
- It selects a specialty label, confidence level, and trigger evidence.
- It also generates a shortlist of the next most useful questions.
Where It Can Break
- Rare disease wording may require an extra clarification turn.
- Comorbidities can initially pull the route toward the wrong specialty family.
- The general fallback remains necessary when evidence is still weak.
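The shape of the routing decision (specialty label, confidence, trigger evidence) can be illustrated with a deliberately naive keyword router. The real router weighs the message, draft, and documents together; the keyword lists below are illustrative assumptions only.

```python
# Toy router showing the output shape of Step 03: specialty, confidence,
# and the evidence that triggered the route. Keyword lists are assumed.
ROUTE_KEYWORDS = {
    "oncology": ["cancer", "tumor", "metastatic", "chemotherapy"],
    "diabetes": ["diabetes", "hba1c", "insulin"],
    "cardiology": ["heart failure", "arrhythmia", "ejection fraction"],
}

def route(message: str) -> dict:
    text = message.lower()
    for specialty, keywords in ROUTE_KEYWORDS.items():
        hits = [k for k in keywords if k in text]
        if hits:
            confidence = "high" if len(hits) > 1 else "low"
            return {"specialty": specialty, "confidence": confidence, "evidence": hits}
    # Weak evidence: stay in the general track rather than guessing.
    return {"specialty": "general", "confidence": "low", "evidence": []}
```

Note how the fallback branch mirrors the design above: when evidence is weak, the system stays general instead of committing.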
Step 04
Follow-up questions aim for clinical relevance
The chat is designed to feel simple for the user, while the underlying question strategy aims to collect trial-relevant facts the way a careful coordinator would.
What The User Feels
- The assistant asks one high-value question at a time.
- Questions stay patient-friendly even when they target medically important details.
- The conversation should avoid re-asking what the records already established.
What The System Locks In
- Anthropic models are routed by complexity and attachment load.
- The assistant is required to return a strict structured payload rather than a loose essay.
- Missing fields, rationale, specialty signals, and draft updates are returned together.
Where It Can Break
- A user may answer broadly instead of giving a concrete fact.
- Malformed or incomplete model output must be handled defensively.
- Even strong questions cannot recover facts that the user or documents do not provide.
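Handling malformed model output defensively might look like the following sketch: parse, check the required keys, and fall back to a clarification turn instead of trusting a loose answer. The key names are hypothetical, not the product's actual payload schema.

```python
# Sketch of defensive payload handling for Step 04: malformed or incomplete
# model output degrades to a safe clarification turn. Keys are assumed.
import json

REQUIRED_KEYS = {"next_question", "draft_updates", "missing_fields"}
FALLBACK = {"next_question": "Could you clarify that?", "draft_updates": {}, "missing_fields": []}

def parse_assistant_payload(raw: str) -> dict:
    try:
        payload = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return dict(FALLBACK)
    if not isinstance(payload, dict) or not REQUIRED_KEYS <= payload.keys():
        return dict(FALLBACK)  # missing keys: never treat a loose essay as structure
    return payload
```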
Step 05
The draft becomes a real search profile
This is the control point of the product. The system keeps converting the conversation into explicit trial-search fields instead of leaving everything inside the chat transcript.
What The User Feels
- The intake surface shows a visible search focus and extracted record facts.
- The user can tell the system is building something concrete, not just chatting.
- The later search page opens with actual filters rather than a black box.
What The System Locks In
- Condition, keywords, medications, exclusions, and filters are merged across turns.
- Duplicate facts are collapsed into a cleaner structured draft.
- Conflicting details can remain visible as things to verify rather than being silently discarded.
Where It Can Break
- Missing age, geography, or sex reduces search precision.
- A still-incomplete draft can produce a broad result set.
- Narrow biomarker or line-of-therapy assumptions can shrink the list too early.
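Merging draft fields across turns, as described in this step, can be sketched with a simple rule: list fields are deduplicated in order, and newer scalar facts replace older ones. The field handling here is an assumption about the draft shape, not the actual implementation.

```python
# Minimal sketch of Step 05's cross-turn merge: duplicates collapse,
# newer scalars win, and None never overwrites an established fact.
def merge_draft(existing: dict, update: dict) -> dict:
    merged = dict(existing)
    for key, value in update.items():
        if isinstance(value, list):
            seen = merged.get(key, [])
            merged[key] = seen + [v for v in value if v not in seen]
        elif value is not None:
            merged[key] = value  # newer scalar facts replace older ones
    return merged
```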
Step 06
The registry search is executed and refined by people
When enough structure exists, the system turns the draft into explicit query parameters, searches the public registry, summarizes the first pass, and keeps the guide available for follow-up refinement.
What The User Feels
- The search page opens with visible, editable filters.
- The guide summarizes what came back and stays available so the user can keep chatting after search.
- The shortlist can be reviewed, saved, and narrowed manually or through follow-up chat.
- Trial details and investigator views remain available for deeper diligence.
What The System Locks In
- ClinicalTrials.gov is used as the authoritative trial source.
- Results are filtered again where the returned preview data permits.
- The current result set is summarized into a plain-language overview tied to the active filters.
- Search-follow-up chat turns are pushed back into structured criteria and rerun on the same page.
- The application keeps signaling that final screening belongs to clinicians and study teams.
Where It Can Break
- Registry data can lag behind real-world site status.
- Result summaries reflect the current search pass, so they change when filters or registry data change.
- A promising trial may still fail on full eligibility review.
- The final shortlist still needs clinician and coordinator confirmation.
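The local re-filtering pass mentioned in this step can be sketched for one criterion, age: trials whose preview range excludes the patient are dropped, and previews missing that data are kept rather than silently excluded. The preview field names below are assumptions.

```python
# Sketch of conservative local re-filtering on registry preview data.
# Missing age bounds keep the trial in the list; certainty is never faked.
def filter_by_age(previews: list, patient_age: int) -> list:
    kept = []
    for trial in previews:
        min_age = trial.get("min_age")
        max_age = trial.get("max_age")
        if min_age is not None and patient_age < min_age:
            continue
        if max_age is not None and patient_age > max_age:
            continue
        kept.append(trial)  # in range, or age data absent from the preview
    return kept
```

The same keep-when-unknown posture is why apparent fit is never treated as final eligibility.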
Disease Tracks
The intake should not ask the same questions for every disease.
As soon as the disease family is identifiable, the next questions shift toward the facts with the highest search value for that condition.
Oncology / Cancer
- Cancer type and tissue of origin
- Stage, recurrence, and metastatic status
- Biomarkers, receptor status, or genomics
- Prior lines of therapy and current treatment state
Diabetes / Metabolic
- Type of diabetes or metabolic subtype
- Recent HbA1c or control pattern
- Current medication, insulin, or CGM/device usage
- Complications and recent hypo- or hyperglycemia
Cardiology
- Heart-failure versus arrhythmia phenotype
- Recent imaging or ejection fraction
- Functional limitation and symptom burden
- Procedure, device, and medication history
Neurology
- Exact syndrome or suspected subtype
- Motor, cognitive, sleep, or autonomic features
- Disease course and day-to-day burden
- Medication and specialty treatment history
Autoimmune / Inflammatory
- Exact diagnosis and organ system involved
- Current activity or flare severity
- Antibodies, markers, or lab evidence
- Immunosuppressive or biologic exposure
General fallback
- Most specific diagnosis available
- Age, sex, and travel radius
- Major therapies or must-avoid exclusions
- Any practical search constraints
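The track-specific question sets above reduce to a mapping from disease family to its highest-value fields. The entries below abbreviate the lists in this section; the field identifiers are illustrative shorthand, not the product's internal names.

```python
# Sketch of the disease-track routing table: each track points at the
# fields with the highest search value, with a general fallback.
TRACK_FIELDS = {
    "oncology": ["cancer_type", "stage", "biomarkers", "prior_therapy"],
    "diabetes": ["diabetes_type", "hba1c", "medications", "complications"],
    "cardiology": ["phenotype", "ejection_fraction", "symptom_burden", "history"],
    "neurology": ["syndrome", "features", "course", "treatment_history"],
    "autoimmune": ["diagnosis", "activity", "markers", "biologic_exposure"],
    "general": ["diagnosis", "demographics", "therapies", "constraints"],
}

def next_fields(track: str) -> list:
    """Unknown or still-vague tracks fall back to the general field set."""
    return TRACK_FIELDS.get(track, TRACK_FIELDS["general"])
```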
Failure Modes
The happy path matters, but the failure paths matter more.
This product only stays trustworthy if it is explicit about where uncertainty enters the workflow and how the system should respond when confidence drops.
Records are incomplete, blurry, or conflicting
What It Looks Like
The upload may be hard to read, missing pages, or disagree with the user’s summary.
How The System Should Respond
The system should keep the interpretation conservative, surface the uncertainty, and move to clarification rather than pretending confidence.
Best Next User Move
Upload a clearer record, paste the key result manually, or confirm which source is newer and more accurate.
The diagnosis is still too vague
What It Looks Like
A user may start with a symptom description or a broad label that is not yet specific enough for precise routing.
How The System Should Respond
The intake remains in a general track and asks narrowing questions until the disease family is clearer.
Best Next User Move
Provide the most specific diagnosis or report wording available, even if it is only one line from a note.
The result list is too broad
What It Looks Like
Missing stage, biomarker, therapy line, geography, or age details can produce a long and noisy trial list.
How The System Should Respond
The assistant keeps prioritizing the next questions with the highest filtering value while the search page remains editable.
Best Next User Move
Add the details with the most narrowing power first: subtype, stage, biomarker, treatment history, travel radius, or age.
The result list is too narrow
What It Looks Like
A specific biomarker, phase, geography, sponsor, or line-of-therapy assumption may leave few or zero results.
How The System Should Respond
The structure stays visible so the user can relax the assumptions instead of starting over.
Best Next User Move
Widen geography, broaden phase or status, or move one level up in disease specificity before narrowing again.
Registry status does not match site reality
What It Looks Like
A registry listing can look open while a local site is paused, full, or not actively screening.
How The System Should Respond
The application treats the registry as the source list but not as final proof of availability or eligibility.
Best Next User Move
Use the trial detail page, site contact information, and clinical team review before assuming a trial is truly open.
Apparent fit is not final eligibility
What It Looks Like
A trial can look promising at the shortlist stage and still fail on full inclusion and exclusion review.
How The System Should Respond
The system keeps framing its role as discovery and preparation rather than final medical authorization.
Best Next User Move
Use the platform to prepare a stronger shortlist, then verify with clinicians and the study team.
Operating Boundaries
What the product is trying to be, and what it is not trying to be.
This matters for trust. TrialFinder should help people reach a stronger shortlist faster, but it should not pretend to be the final source of truth.
What it should do
- Collect trial-relevant facts in a patient-friendly way.
- Use uploaded records to reduce repetitive questioning.
- Produce an inspectable and editable search draft.
- Help users reach a better shortlist before contacting study teams.
What it should not do
- Pretend it can replace the study team’s eligibility review.
- Treat registry status as final proof of local site availability.
- Invent medical facts that the user or records did not provide.
- Frame the shortlist as diagnosis, treatment advice, or authorization.
Next Move
Open the intake or jump straight to search.
The guided intake is best when the case is messy or the records need interpretation first. The search page is best when the user already knows the filters they want.