The Appeeky API uses an in-memory TTL cache with request deduplication to minimize upstream calls to Apple services and deliver fast response times. No external cache infrastructure (Redis, Memcached, etc.) is required.

Overview

Every expensive upstream call — Apple RSS feeds, iTunes Search, iTunes Lookup, and App Store scraping — passes through a shared MemoryCache layer. The cache stores results for a configurable TTL and automatically evicts stale entries. When multiple requests arrive for the same cache key simultaneously, only one upstream call is made. All concurrent callers wait for the same in-flight promise (request deduplication), preventing thundering herd problems.
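The core of this pattern can be sketched in a few lines. This is a minimal illustration, not the actual Appeeky implementation; the class and method names (`MemoryCache`, `get`) mirror the description above but are assumptions:

```typescript
// Minimal sketch of an in-memory TTL cache with request deduplication.
// Names are illustrative, not the actual Appeeky internals.

type Entry<T> = { value: T; expiresAt: number };

class MemoryCache {
  private entries = new Map<string, Entry<unknown>>();
  private inFlight = new Map<string, Promise<unknown>>();

  async get<T>(key: string, ttlMs: number, fetcher: () => Promise<T>): Promise<T> {
    // HIT: entry exists and has not expired — return it immediately.
    const cached = this.entries.get(key);
    if (cached && cached.expiresAt > Date.now()) {
      return cached.value as T;
    }
    // IN-FLIGHT: another caller is already fetching this key — share its promise.
    const pending = this.inFlight.get(key);
    if (pending) {
      return pending as Promise<T>;
    }
    // MISS: start exactly one upstream call and register it for dedup.
    const promise = fetcher()
      .then((value) => {
        this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
        return value;
      })
      .finally(() => this.inFlight.delete(key));
    this.inFlight.set(key, promise);
    return promise;
  }
}
```

Because the in-flight promise is registered before any caller awaits it, concurrent requests for the same key all resolve from a single upstream fetch.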

Cache TTLs

| Data Type | TTL | Rationale |
| --- | --- | --- |
| iTunes Lookup (per app) | 30 min | App metadata changes infrequently |
| iTunes Lookup (batch) | 30 min | Same as single lookup |
| IAP Scraping | 60 min | Slow to fetch (~5 s per app), rarely changes |
| Apple RSS Chart (per country) | 15 min | Charts update every few hours on Apple’s side |
| Country Rankings (per app) | 10 min | Derived from cached charts; short TTL for freshness |
| App Intelligence (full) | 15 min | Aggregates many sources; balances speed vs. freshness |
| New Releases | 30 min | Expensive (55+ iTunes Search queries); data is stable |
| Discover: New #1 Club | 30 min | Based on RSS chart; updates infrequently |
| Discover: Aggregated | 30 min | Combines all discovery sections |
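In code, these TTLs would typically live in one place as constants. The object and key names below are hypothetical, chosen only to mirror the table:

```typescript
// Hypothetical TTL constants mirroring the table above; names are illustrative.
const MINUTE = 60_000; // milliseconds

const TTL = {
  itunesLookup: 30 * MINUTE, // single and batch lookups
  iapScrape: 60 * MINUTE,
  rssChart: 15 * MINUTE, // per country
  countryRankings: 10 * MINUTE,
  appIntelligence: 15 * MINUTE,
  newReleases: 30 * MINUTE,
  discover: 30 * MINUTE, // #1 club and aggregated
} as const;
```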

Expected Response Times

| Endpoint | Cold (1st call) | Warm (cached) |
| --- | --- | --- |
| GET /v1/health | < 10 ms | < 10 ms |
| GET /v1/apps/:id | ~300 ms | < 5 ms |
| GET /v1/apps/:id/intelligence | 3–8 s | < 5 ms |
| GET /v1/apps/:id/country-rankings | 2–4 s | < 5 ms |
| GET /v1/new-releases | 10–20 s | < 5 ms |
| GET /v1/discover/new-number-1 | 2–4 s | < 5 ms |
| GET /v1/discover | 12–25 s | < 5 ms |
| GET /v1/categories/:id/top | 1–3 s | < 5 ms |

Architecture

Request → MemoryCache.get(key, ttl, fetcher)
           ├─ HIT       → return cached data instantly (< 5 ms)
           ├─ MISS      → call fetcher(), store result, return
           └─ IN-FLIGHT → wait for existing promise (dedup)
Cache is per-process. The cache lives in Node.js memory, so restarting the server clears all cached data; the next request for each key triggers a fresh upstream fetch.
No external dependencies. The caching layer has zero infrastructure dependencies — no Redis, Memcached, or database. This keeps the deployment simple and the operational overhead minimal.

Key Design Decisions

  1. Full scan for new releases — The new-releases endpoint always scans all 55+ search terms to maximize the discovery pool, then caches the full result. The 30-minute cache makes the cold-start cost (10–20s) acceptable.
  2. Shared chart cache — Country rankings, discover (#1 club), and category top charts all read from the same Apple RSS chart cache (15 min TTL). Repeated calls across these services never duplicate upstream requests.
  3. Composable caches — Higher-level endpoints like getAppIntelligence call lower-level services (lookupApp, scrapeIAPs, getCountryRankings), each with its own cache. Even a “cold” intelligence call benefits from warm sub-caches if any component was recently fetched.
  4. Request deduplication — If 10 users hit /v1/new-releases at the same time and the cache is empty, only one upstream fetch runs. The other 9 requests wait for the same promise to resolve, then all receive identical data.
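Decision 3 (composable caches) is worth illustrating: even when a top-level cache entry is cold, any sub-service fetched recently is served from its own cache. The sketch below uses a simplified `cached` helper and hypothetical service names (`lookupApp`, `getAppIntelligence`) modeled on the ones mentioned above, not the actual Appeeky code:

```typescript
// Sketch of composable caches: a "cold" high-level call reuses warm sub-caches.
// All names here are illustrative.

const store = new Map<string, { value: unknown; expiresAt: number }>();

async function cached<T>(key: string, ttlMs: number, fetcher: () => Promise<T>): Promise<T> {
  const hit = store.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value as T;
  const value = await fetcher();
  store.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

let lookupCalls = 0;

// Low-level service with its own 30 min cache.
const lookupApp = (id: string) =>
  cached(`lookup:${id}`, 30 * 60_000, async () => {
    lookupCalls += 1; // stands in for a real iTunes Lookup request
    return { id, name: "Demo App" };
  });

// Higher-level service with a 15 min cache that composes the lower-level one.
const getAppIntelligence = (id: string) =>
  cached(`intel:${id}`, 15 * 60_000, async () => {
    const app = await lookupApp(id); // served from the lookup cache if warm
    return { app, fetchedAt: Date.now() };
  });
```

If `lookupApp("123")` ran recently, a cold `getAppIntelligence("123")` call skips the upstream lookup entirely and only pays for the parts that are not cached.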