Making unclutr photos feel fast at scale: a technical deep dive with a user lens
Last updated: March 27th, 2026
Coming in version 2.3: The changes below will ship in the upcoming unclutr photos 2.3 release.
What this post covers
- Main-thread scan updates, feature extraction, grid thumbnails, and large-set grouping
- Why each change matters from a user perspective—not just on paper
- Tradeoffs, guardrails, and what we’re measuring next
When people use unclutr photos, they’re not thinking about algorithmic complexity, PhotoKit request queues, or Vision feature extraction.
They’re trying to do one thing: clean up their photo library quickly and confidently.
That user expectation shapes how we build the app.
Our codebase follows a few core principles:
- Do expensive work off the main thread so the UI stays interactive.
- Cache aggressively where repeat work would otherwise dominate.
- Prefer progressive rendering (show something useful early, refine later).
- Scale behavior with dataset size so “works for 200 photos” doesn’t collapse at 20,000.
This optimization cycle focused exactly on that: preserving detection quality while making large scans and dense grids feel smoother.
The bottlenecks we targeted
From the scanner and review pipelines, we found four practical pain points:
- Too much main-thread churn during scanning: progress and skipped counters were pushed to the main thread extremely frequently.
- A synchronous image path in the hot extraction loop: feature extraction relied on per-asset synchronous image fetches, which throttled throughput.
- Thumbnail request pressure in large grids: grid cells could trigger extra fallback requests, increasing request/cancel churn under fast scrolling.
- Quadratic grouping costs on large sets: similarity grouping compared all pairs, a cost that grows quadratically and hurts large-batch responsiveness.
What we changed (and why)
1) Smoothed scan-loop updates to reduce UI thread pressure
In PhotoScannerViewModel, we moved from near per-asset UI updates to throttled progress publishing and batched skipped-asset commits.
Why this matters
From the user’s perspective, this removes micro-jank during scanning. The progress bar still moves correctly, but the app avoids drowning the main thread in update traffic.
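As a rough illustration of the throttling idea, here is a minimal, self-contained sketch. The ProgressThrottler name, the 100 ms interval, and the always-publish-final rule are assumptions for the example, not the actual PhotoScannerViewModel API.

```swift
import Foundation

// Hypothetical throttler: forwards progress only when enough time has
// elapsed, so a 1,000-asset scan produces a handful of UI updates
// instead of 1,000.
final class ProgressThrottler {
    private let interval: TimeInterval
    private var lastEmit: TimeInterval = -.infinity
    private(set) var published: [Int] = []  // stand-in for main-thread updates

    init(interval: TimeInterval = 0.1) { self.interval = interval }

    /// Publishes `processed` only if `interval` has elapsed since the last
    /// publish; the final value is always published so the bar completes.
    func report(_ processed: Int, of total: Int, now: TimeInterval) {
        let isFinal = processed == total
        guard isFinal || now - lastEmit >= interval else { return }
        lastEmit = now
        published.append(processed)
    }
}

// Simulate a 1,000-asset scan where each asset takes ~1 ms:
let throttler = ProgressThrottler()
for i in 1...1000 {
    throttler.report(i, of: 1000, now: Double(i) * 0.001)
}
print(throttler.published.count)  // 11 updates instead of 1,000
```

The same shape works for the batched skipped-asset commits: accumulate in a local buffer and flush it to the main thread on the same cadence.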
2) Reworked photo feature extraction to avoid hot-loop sync image dependence
We replaced the old sync-image extraction path with a more direct pipeline:
- request image data
- downsample once for feature extraction
- run Vision feature print generation
- cache the result
We also removed redundant per-asset library existence checks in the same hot path and relied on feature cache presence where appropriate.
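The cache-first and downsample-once steps can be sketched in miniature. This is a hedged illustration: the 768 px long-edge cap and the FeatureCache shape are assumptions for the example; in the real pipeline the downsampled image feeds Vision's feature print generation.

```swift
import Foundation

// Cache-first guard: skip extraction entirely when features already exist.
struct FeatureCache {
    private var store: [String: [Float]] = [:]  // asset ID -> feature print
    func contains(_ id: String) -> Bool { store[id] != nil }
    mutating func insert(_ features: [Float], for id: String) { store[id] = features }
}

/// Downsample once: compute a single decode target that caps the long
/// edge while preserving aspect ratio, so feature extraction never
/// touches full-size pixels.
func downsampleTarget(width: Int, height: Int, maxEdge: Int = 768) -> (w: Int, h: Int) {
    let longEdge = max(width, height)
    guard longEdge > maxEdge else { return (width, height) }
    let scale = Double(maxEdge) / Double(longEdge)
    return (Int((Double(width) * scale).rounded()),
            Int((Double(height) * scale).rounded()))
}

let target = downsampleTarget(width: 4032, height: 3024)
print(target)  // (w: 768, h: 576)
```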
Why this matters
This improves scan-throughput consistency, especially on larger sets and in mixed-media scenarios, while keeping feature quality aligned with existing behavior.
3) Reduced thumbnail churn in SwiftUI grids
In PhotoThumbnailView we changed the thumbnail delivery strategy to be more scroll-friendly:
- switched to opportunistic delivery mode so a usable frame arrives earlier
- reduced the requested base size to match what the grid actually displays
- removed the delayed duplicate fallback requests that amplified churn during rapid scrolling
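To make the sizing change concrete, here is a small sketch; the 120 pt cell and 2x scale are illustrative numbers, not the shipped values. In the real view the computed size feeds a PhotoKit image request whose options use the opportunistic delivery mode, letting a fast degraded frame land first and be refined later.

```swift
import Foundation

// Request pixels that match what the grid actually draws, instead of a
// larger "safe" size that inflates decode cost for every visible cell.
struct ThumbnailSpec {
    var cellPointSize: Double   // the cell's layout size in points
    var displayScale: Double    // e.g. 2.0 or 3.0 on Retina screens

    var targetPixelSize: Double { cellPointSize * displayScale }
}

let spec = ThumbnailSpec(cellPointSize: 120, displayScale: 2)
print(spec.targetPixelSize)  // 240.0 pixels per edge
```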
Why this matters
For users, this means less stutter while browsing and fewer “loading fights” in dense photo grids.
4) Introduced bounded pair-pruning for large-set grouping
For both photo-library and local-file grouping, we kept exact pairwise behavior for smaller sets, but for larger sets we enabled bounded comparisons:
- neighbor-span limit
- time-window limit
This reduces pair explosion in very large runs while preserving strong practical grouping quality for user workflows.
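A minimal sketch of bounded pair generation, assuming assets sorted by capture time; the span of 20 neighbors and the 60-second window are illustrative defaults, not the shipped thresholds.

```swift
import Foundation

// Bounded comparisons: each asset is only compared against a limited
// number of time-sorted neighbors, instead of against every other asset.
func candidatePairs(
    timestamps: [TimeInterval],       // capture times, sorted ascending
    neighborSpan: Int = 20,           // neighbor-span limit
    timeWindow: TimeInterval = 60     // time-window limit (seconds)
) -> [(Int, Int)] {
    var pairs: [(Int, Int)] = []
    for i in timestamps.indices {
        let upper = min(i + neighborSpan, timestamps.count - 1)
        guard upper > i else { continue }
        for j in (i + 1)...upper {
            // Sorted input means later neighbors only get farther away in
            // time, so we can stop at the first one outside the window.
            if timestamps[j] - timestamps[i] > timeWindow { break }
            pairs.append((i, j))
        }
    }
    return pairs
}

// Four photos: three within a minute, one much later.
print(candidatePairs(timestamps: [0, 10, 30, 500]).count)  // 3 pairs, not 6
```

For small sets the bounds never bind, which is how the exact pairwise behavior is preserved there.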
Why this matters
Users care that a scan finishes in a reasonable time more than they care that every mathematically possible distant pair was compared. This is a deliberate product tradeoff: faster completion and better perceived responsiveness for real-world cleanup sessions.
What users should feel now
If we did this right, users won’t notice “optimization work.” They’ll notice outcomes:
- scans start and progress more smoothly
- scrolling and reviewing feels less jumpy
- large libraries remain usable instead of degrading sharply
- cleanup sessions feel shorter and more predictable
That’s the real KPI in consumer cleanup tools: trust through responsiveness.
Tradeoffs and guardrails
These changes intentionally balance correctness and speed:
- We did not claim synthetic benchmark wins without trace data.
- We introduced pair-pruning only when dataset size justifies it.
- We preserved existing behavior for smaller sets where exactness is cheap.
- We kept the architecture consistent with existing cache-first patterns.
What’s next
The next step is measurement-backed iteration:
- add lightweight phase timers (extract vs group vs render)
- record scan-size buckets to evaluate pruning thresholds
- run Instruments passes on representative libraries
- tune bounds from observed behavior, not assumptions
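As a sketch of the first item, a phase timer can be as small as the following; the PhaseTimer name and phase labels are assumptions for illustration, not shipped instrumentation.

```swift
import Foundation

// Accumulates wall-clock time per named phase (extract, group, render, ...).
struct PhaseTimer {
    private(set) var durations: [String: TimeInterval] = [:]

    /// Runs `work`, adds its elapsed time to the named phase, and
    /// returns the work's result unchanged.
    mutating func measure<T>(_ phase: String, _ work: () -> T) -> T {
        let start = Date()
        defer { durations[phase, default: 0] += Date().timeIntervalSince(start) }
        return work()
    }
}

var timer = PhaseTimer()
let checksum = timer.measure("extract") { (1...1_000).reduce(0, +) }
print(checksum)  // 500500; durations["extract"] now holds the elapsed time
```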
In short: this pass strengthens the app’s foundation for scale, while keeping the product promise intact:
help people declutter fast, without friction.