June opens with a practical goal: turn May’s plan into a working, pupil-first pipeline. The first week focuses on data hygiene. A simple utility groups images into clean folders, fixes broken names, and flags duplicates. Students can now find, compare, and reuse images without guesswork.
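A cleanup utility of this kind can be very small. The sketch below groups images into a clean folder under normalized names and flags byte-for-byte duplicates; the folder layout, naming scheme, and MD5-based duplicate check are illustrative assumptions, not the team's actual script:

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Content hash used to flag exact duplicates."""
    return hashlib.md5(path.read_bytes()).hexdigest()

def tidy_images(src: Path, dst: Path) -> list[Path]:
    """Copy images into a clean folder under normalized names;
    return the files skipped as byte-for-byte duplicates."""
    dst.mkdir(parents=True, exist_ok=True)
    seen: set[str] = set()
    duplicates: list[Path] = []
    count = 0
    for img in sorted(src.glob("*.jpg")):
        digest = file_hash(img)
        if digest in seen:
            duplicates.append(img)  # same bytes already copied once
            continue
        seen.add(digest)
        # "eye_0000.jpg", "eye_0001.jpg", ... replaces broken names
        (dst / f"eye_{count:04d}.jpg").write_bytes(img.read_bytes())
        count += 1
    return duplicates
```

With names and duplicates handled once, students can find, compare, and reuse images without guessing which copy is current.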
Attention shifts to what a field worker will actually see: strong sunlight, reflections on the cornea, eyelids in frame, and eyes that are slightly off-center. The team converts these observations into design rules. The capture screen shows a bold guidance ring, large buttons, and a retry option. Every step is short. Every label avoids jargon.
Mid-month, the “pupil-first” idea becomes concrete. The sequence is clear: upload the photo, center the iris with the ring, crop only the pupil region, then run the prediction. This keeps the model focused on the right signal and reduces noise from skin, background, and shadows. A small log records each attempt: usable, unusable, or needs re-capture.
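The crop-then-predict step can be sketched in a few lines. Here the pixel-grid representation, the `center`/`radius` values from the guidance ring, and the logging helper are assumptions for illustration, not the project's actual code:

```python
def crop_pupil(image, center, radius):
    """Keep only the pupil region so the model never sees skin,
    background, or shadows. `image` is a row-major pixel grid;
    `center` and `radius` would come from the guidance ring."""
    cx, cy = center
    x0, y0 = max(cx - radius, 0), max(cy - radius, 0)
    return [row[x0:cx + radius] for row in image[y0:cy + radius]]

def log_attempt(log, image_id, status):
    """Record each attempt as usable, unusable, or needs re-capture."""
    assert status in {"usable", "unusable", "re-capture"}
    log.append((image_id, status))
```

Cropping before prediction is the whole point of the pupil-first design: the model only ever sees the region the ring was centered on.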
Training for volunteers moves in parallel. The students write plain-language SOPs that explain how to hold the phone, how far to stand, and how to reduce glare. A one-page checklist defines a “usable image.” If the checklist fails, the app recommends a quick re-capture instead of a weak prediction.
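The one-page checklist translates naturally into a single guard function. The three checks below mirror the SOP's concerns (exposure, sharpness, ring lock-on), but every threshold is a placeholder, not a validated value:

```python
def usable(brightness: float, blur_score: float, pupil_found: bool) -> bool:
    """Checklist as code; values are normalized to [0, 1].
    Any failed check means the app asks for a quick re-capture
    instead of producing a weak prediction."""
    checks = [
        0.2 <= brightness <= 0.9,  # not under- or over-exposed (glare)
        blur_score >= 0.5,         # sharp enough to resolve the pupil
        pupil_found,               # guidance ring locked onto an eye
    ]
    return all(checks)
```

Keeping the gate to a single boolean makes the re-capture rule unambiguous for volunteers and trivial to test.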
Clinical alignment remains steady and brief. The project confirms clear result states—Likely Cataract, Uncertain, Clear—and what to do next in each case. Uncertain cases loop back to re-capture before referral. Pilot metrics are posted on the wall: time per screen, unusable-image rate, and agreement with doctor review. Progress becomes measurable.
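The three result states amount to a mapping from a model score to a next action. A minimal sketch, assuming a probability-like score; the cutoff values are illustrative only, not clinically validated:

```python
def result_state(score: float, low: float = 0.35, high: float = 0.75) -> str:
    """Map a model score in [0, 1] to one of the three result states.
    The `low`/`high` cutoffs are placeholders for this sketch."""
    if score >= high:
        return "Likely Cataract"   # proceed to referral
    if score <= low:
        return "Clear"             # no action needed
    return "Uncertain"             # loop back to re-capture first
```

Making the middle band explicit is what lets Uncertain cases loop back to re-capture before any referral decision is made.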
By the last week, June has a rhythm. Cleaned data. A functioning pupil-centric flow. Field SOPs that volunteers can actually follow. A tiny “golden set” that verifies nothing broke after each change. The pipeline feels practical because it is built around the reality of rural work: short steps, clear cues, and reliable next actions.
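A “golden set” check of this kind can be as small as a dictionary of frozen labels compared against fresh predictions; `predict` here is a stand-in for the real model call, not the project's API:

```python
def golden_check(predict, golden: dict[str, str]) -> list[str]:
    """Run the current model over a tiny frozen set of labelled
    images and return the ids whose prediction changed.
    An empty list means nothing broke after the latest change."""
    return [img_id for img_id, expected in golden.items()
            if predict(img_id) != expected]
```

Run after every change, this turns “did we break anything?” into a one-line answer.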
June closes with confidence. The app captures better images. The model sees the pupil, not the background. Volunteers have guidance they can use in the sun, not only in a lab. The project is ready to connect image processing with on-device predictions and to plan a small field check with doctor oversight next month.