A coalition of nonprofit clinics tested AI-assisted triage to handle high patient volumes in neighborhoods with chronic staffing shortages. Intake forms were pre-scored for urgency, then routed to nurses for confirmation before appointments were assigned.
According to program data reviewed by Becon, median intake time fell from 18 minutes to 11 minutes across participating sites. The largest gains came in clinics that already had standardized triage scripts and bilingual support staff.
Medical directors emphasized that no patient was auto-assigned without human verification. They described the model as a decision-support layer rather than a replacement for clinical judgment, especially for patients with multiple chronic conditions.
Patient advocates raised concerns about transparency when intake recommendations changed. In response, clinics added plain-language explanations to discharge paperwork and gave patients a direct channel to request manual reassessment.
The pilot also surfaced operational issues unrelated to AI. Sites with outdated scheduling software saw smaller improvements, suggesting that gains depend on the broader workflow, not just the triage model itself.
Health system leaders are now considering expansion, but only with clear guardrails: quarterly bias audits, mandatory nurse sign-off, and public reporting on escalation rates by language and age group.