CatScan - May Diary

The month begins with a new cohort of students joining the program and meeting a clear reality: in many villages, a cataract check can cost an entire day’s travel and lost wages. The team frames the business problem in plain terms—early screening must happen close to home, be simple to use, and lead to timely referrals. With that north star, students map stakeholders (community health workers, ophthalmologists, patients) and agree on a single promise: reduce the distance between concern and care.

To build a shared foundation, the cohort studies what a cataract is, how it forms in the lens, and why early detection matters. Short primers and doctor-reviewed notes translate medical terms into field-ready language. This clinical grounding shapes product choices: every screen, every label, and every instruction must be understandable to a volunteer using a basic Android phone outdoors.

Students then open the first datasets. They scan filenames, labels, and image quality, noticing common issues: glare, partially closed eyelids, and off-center eyes. They also spot a risk: generic models can latch onto skin or background texture rather than the pupil. In response, the team proposes a "pupil-first" pipeline: capture with a guidance ring, confirm alignment, crop the pupil, then predict. The approach keeps the model focused on the right signal and makes training and evaluation more consistent.
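The crop-then-predict step can be sketched in a few lines. This is a minimal illustration, not the team's implementation: `detect_pupil` and `classify` are hypothetical stand-ins for whatever detector (e.g. a Hough-circle or landmark model) and classifier the pipeline ends up using, and the 1.5x margin around the pupil radius is an assumed default.

```python
import numpy as np

def crop_pupil(image: np.ndarray, center: tuple, radius: int,
               margin: float = 1.5) -> np.ndarray:
    """Crop a square region around the detected pupil, clamped to image bounds.

    center is (row, col); margin widens the crop beyond the pupil radius
    so some iris context survives (1.5x is an assumed default).
    """
    cy, cx = center
    half = int(radius * margin)
    y0, y1 = max(0, cy - half), min(image.shape[0], cy + half)
    x0, x1 = max(0, cx - half), min(image.shape[1], cx + half)
    return image[y0:y1, x0:x1]

def pupil_first_predict(image, detect_pupil, classify):
    """Pupil-first flow: the guidance ring handles capture in the app;
    here we locate the pupil, crop to it, and only then predict."""
    center, radius = detect_pupil(image)   # hypothetical detector
    crop = crop_pupil(image, center, radius)
    return classify(crop)                  # hypothetical classifier
```

The point of the sketch is the ordering: the classifier never sees the full frame, so skin and background cannot dominate the signal it learns from.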

Organization follows quickly. A simple directory convention is drafted; a lightweight script is planned to standardize names and flag unusable images. A “golden set” of clean samples is earmarked for repeat checks as the pipeline improves. The students keep the app experience minimal: large cues, short text, and an offline-friendly flow that can sync later.

Next Tasks (from May planning)

Create a small “golden set” for regression testing across devices and lighting.

Finalize a consistent folder/filename schema; implement a one-click “clean & sort” script.

Prototype the pupil guidance ring and crop confirmation screen.

Draft field SOPs (distance, angle, glare reduction) and a “usable image” checklist.

Define pilot metrics: time per screen, unusable-image rate, agreement with clinician review.
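The three pilot metrics above reduce to simple counts once field records exist. A minimal sketch, assuming each screening is logged as a dict with `seconds`, `usable`, and (for usable images) `model` and `clinician` labels; these field names are invented for illustration, and agreement is computed as plain percent agreement rather than a chance-corrected statistic.

```python
def pilot_metrics(records):
    """Compute the three pilot metrics from screening records.

    Assumed record shape (illustrative): {"seconds": float, "usable": bool,
    "model": str, "clinician": str} with labels only on usable images.
    """
    n = len(records)
    mean_seconds = sum(r["seconds"] for r in records) / n
    unusable_rate = sum(not r["usable"] for r in records) / n
    paired = [r for r in records if r["usable"]]
    # Percent agreement between model output and clinician review.
    agreement = sum(r["model"] == r["clinician"] for r in paired) / len(paired)
    return {"mean_seconds": mean_seconds,
            "unusable_rate": unusable_rate,
            "agreement": agreement}
```

If the pilot grows, percent agreement could be swapped for Cohen's kappa to correct for chance agreement, but the record shape stays the same.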
