## Overview
Every expensive upstream call — Apple RSS feeds, iTunes Search, iTunes Lookup, and App Store scraping — passes through a shared in-memory cache layer. The cache stores results for a configurable TTL and automatically evicts stale entries.
When multiple requests arrive for the same cache key simultaneously, only one upstream call is made. All concurrent callers wait for the same in-flight promise (request deduplication), preventing thundering herd problems.
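A minimal sketch of how such a cache can combine TTL expiry with promise-based deduplication (the `MemoryCache` class and `getOrFetch` method here are illustrative, not the service's actual API):

```typescript
// Hypothetical sketch of a TTL cache with request deduplication.
// Concurrent callers for the same key share one in-flight promise.
type Entry<T> = { value: T; expiresAt: number };

class MemoryCache {
  private entries = new Map<string, Entry<unknown>>();
  private inFlight = new Map<string, Promise<unknown>>();

  async getOrFetch<T>(
    key: string,
    ttlMs: number,
    fetch: () => Promise<T>,
  ): Promise<T> {
    // Fresh cached value: return it without touching the upstream.
    const entry = this.entries.get(key);
    if (entry && entry.expiresAt > Date.now()) return entry.value as T;

    // Another caller is already fetching this key: await its promise.
    const pending = this.inFlight.get(key);
    if (pending) return pending as Promise<T>;

    // First caller: start the fetch, record it, cache the result.
    const promise = fetch()
      .then((value) => {
        this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
        return value;
      })
      .finally(() => this.inFlight.delete(key));
    this.inFlight.set(key, promise);
    return promise;
  }
}
```

Stale entries are effectively evicted lazily here: an expired entry fails the freshness check and is overwritten by the next fetch.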
## Cache TTLs
| Data Type | TTL | Rationale |
|---|---|---|
| iTunes Lookup (per app) | 30 min | App metadata changes infrequently |
| iTunes Lookup (batch) | 30 min | Same as single lookup |
| IAP Scraping | 60 min | Slow to fetch (~5s per app), rarely changes |
| Apple RSS Chart (per country) | 15 min | Charts update every few hours on Apple’s side |
| Country Rankings (per app) | 10 min | Derived from cached charts; short TTL for freshness |
| App Intelligence (full) | 15 min | Aggregates many sources; balances speed vs freshness |
| New Releases | 30 min | Expensive (55+ iTunes Search queries); data is stable |
| Discover: New #1 Club | 30 min | Based on RSS chart; updates infrequently |
| Discover: Aggregated | 30 min | Combines all discovery sections |
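The table above maps naturally onto a constants object. This is a hypothetical sketch (names are illustrative; the real codebase may organize TTLs differently):

```typescript
// Hypothetical TTL constants mirroring the table above, in milliseconds.
const MINUTE = 60_000;

const CACHE_TTLS = {
  itunesLookup: 30 * MINUTE,
  itunesLookupBatch: 30 * MINUTE,
  iapScraping: 60 * MINUTE,
  rssChart: 15 * MINUTE,       // per country
  countryRankings: 10 * MINUTE,
  appIntelligence: 15 * MINUTE,
  newReleases: 30 * MINUTE,
  discoverNewNumber1: 30 * MINUTE,
  discoverAggregated: 30 * MINUTE,
} as const;
```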
## Expected Response Times
| Endpoint | Cold (1st call) | Warm (cached) |
|---|---|---|
| GET /v1/health | < 10 ms | < 10 ms |
| GET /v1/apps/:id | ~300 ms | < 5 ms |
| GET /v1/apps/:id/intelligence | 3–8 s | < 5 ms |
| GET /v1/apps/:id/country-rankings | 2–4 s | < 5 ms |
| GET /v1/new-releases | 10–20 s | < 5 ms |
| GET /v1/discover/new-number-1 | 2–4 s | < 5 ms |
| GET /v1/discover | 12–25 s | < 5 ms |
| GET /v1/categories/:id/top | 1–3 s | < 5 ms |
## Architecture
**Cache is per-process.** The cache lives in Node.js memory. Restarting the server clears all cached data, and the next request for each key will trigger a fresh upstream fetch.

**No external dependencies.** The caching layer has zero infrastructure dependencies — no Redis, Memcached, or database. This keeps the deployment simple and the operational overhead minimal.
## Key Design Decisions
- Full scan for new releases — The new-releases endpoint always scans all 55+ search terms to maximize the discovery pool, then caches the full result. The 30-minute cache makes the cold-start cost (10–20s) acceptable.
- Shared chart cache — Country rankings, discover (#1 club), and category top charts all read from the same Apple RSS chart cache (15 min TTL). Repeated calls across these services never duplicate upstream requests.
- Composable caches — Higher-level endpoints like `getAppIntelligence` call lower-level services (`lookupApp`, `scrapeIAPs`, `getCountryRankings`), each with its own cache. Even a “cold” intelligence call benefits from warm sub-caches if any component was recently fetched.
- Request deduplication — If 10 users hit `/v1/new-releases` at the same time and the cache is empty, only one upstream fetch runs. The other 9 requests wait for the same promise to resolve, then all receive identical data.
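The deduplication behavior described in the last bullet can be demonstrated in isolation. This is a minimal sketch under assumed names (`dedupe` and `fetchNewReleases` are illustrative, not the service's real functions):

```typescript
// Minimal promise-based request deduplication: concurrent callers
// for the same key share a single in-flight promise.
const inFlight = new Map<string, Promise<unknown>>();

function dedupe<T>(key: string, fetch: () => Promise<T>): Promise<T> {
  const pending = inFlight.get(key);
  if (pending) return pending as Promise<T>;
  const promise = fetch().finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}

// Stand-in for the expensive upstream scan; counts its invocations.
let fetches = 0;
async function fetchNewReleases(): Promise<string> {
  fetches += 1;
  return "releases";
}

// Ten simultaneous "requests" trigger only one upstream fetch.
const demo = Promise.all(
  Array.from({ length: 10 }, () => dedupe("new-releases", fetchNewReleases)),
);
```

Once the shared promise settles, the key is removed from the in-flight map, so a later request after the cache expires starts a fresh fetch rather than receiving stale data.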

