ResoPrism
Three specialist agents search grants, papers, and news in parallel — ranked, cached, resilient.
Research discovery is a fan-out problem: grants, academic papers, and news each live in different places, require different query strategies, and decay at different rates. Any single synchronous search loses coverage. The goal was a system where three agents could run in parallel, their results could be intelligently merged and ranked, and repeat queries wouldn't waste time or tokens re-fetching the same sources.
- Split retrieval across three specialist agents — grants, papers, news — each with its own search strategy and source set, running concurrently via LangGraph's parallel node execution.
- Added a deterministic ranking layer on merge: recency, source credibility, and relevance score are weighted and combined, so the output order is explainable, not probabilistic.
- Used MongoDB Atlas as the cache and result store — on a cache hit the orchestrator skips the relevant agent entirely, on a miss it writes results back for future queries.
- Built graceful degradation into the orchestrator: if one agent fails or times out, results from the remaining two are still ranked and returned rather than surfacing an error.
Parallel agents over sequential search
Sequential search across three sources would have been simpler to implement but 3x slower. LangGraph's parallel node support let us fan out without threading boilerplate — the orchestrator waits for all branches, then merges. Latency dropped from ~9 s sequential to ~3 s parallel.
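The fan-out/merge shape can be sketched with stdlib `asyncio` (the real system uses LangGraph's parallel node execution; the per-source search stubs below are placeholders, not the actual agents):

```python
import asyncio

# Hypothetical stand-ins for the three specialist agents; the real ones
# call external grant, paper, and news APIs with source-specific queries.
async def search_grants(q): return [{"source": "grants", "title": f"grant: {q}"}]
async def search_papers(q): return [{"source": "papers", "title": f"paper: {q}"}]
async def search_news(q):   return [{"source": "news",   "title": f"news: {q}"}]

async def fan_out(query):
    # All three branches run concurrently; the orchestrator waits for
    # every branch, then flattens the per-source lists into one pool.
    branches = await asyncio.gather(
        search_grants(query), search_papers(query), search_news(query)
    )
    return [item for branch in branches for item in branch]

results = asyncio.run(fan_out("coral reef restoration"))
```

With three independent sources of similar latency, wall-clock time is the slowest branch rather than the sum of all three, which is where the ~9 s to ~3 s drop comes from.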
Deterministic ranking, not LLM ranking
Asking an LLM to rank 30 results produces different orderings on the same query depending on context window position. We needed reproducible results for caching to be useful — a weighted score (recency × credibility × relevance) gave us that without another model call.
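A minimal sketch of that kind of scorer, assuming an exponential recency decay with a 30-day scale and per-source credibility weights — both values are illustrative, not the shipped constants:

```python
import math

# Illustrative credibility weights per source type (assumed, not the real values).
SOURCE_CREDIBILITY = {"papers": 1.0, "grants": 0.9, "news": 0.7}

def score(item, now):
    # Recency decays exponentially with age in days; the 30-day scale is assumed.
    age_days = max(0.0, (now - item["published_ts"]) / 86400)
    recency = math.exp(-age_days / 30.0)
    credibility = SOURCE_CREDIBILITY.get(item["source"], 0.5)
    return recency * credibility * item["relevance"]

def rank(items, now):
    # Deterministic: identical inputs always produce identical order;
    # ties break on title so the sort is fully reproducible.
    return sorted(items, key=lambda it: (-score(it, now), it["title"]))
```

Because the ordering is a pure function of the inputs, a cached result set replays in exactly the same order, which is what makes caching the ranked output safe.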
MongoDB Atlas over a vector DB for caching
We didn't need a semantic cache — exact query matches were sufficient at hackathon scale. Atlas gave us a flexible schema for heterogeneous result types (grants look different from papers), TTL indexes for staleness, and a single data store rather than two.
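The hit/miss flow reduces to exact-match lookup keyed by agent and query, with a staleness check. This sketch uses an in-memory dict so it stays self-contained; in the real system the store is a MongoDB Atlas collection and the TTL index handles expiry server-side (the one-hour window below is an assumption):

```python
import time

TTL_SECONDS = 3600  # assumed staleness window; Atlas enforces this via a TTL index

_cache = {}  # stands in for the MongoDB collection, keyed by (agent, exact query)

def cached_search(agent, query, fetch, now=None):
    """Return cached results on an exact-match hit; on a miss, fetch and write back."""
    now = now if now is not None else time.time()
    key = (agent, query)
    entry = _cache.get(key)
    if entry and now - entry["ts"] < TTL_SECONDS:
        return entry["results"], True   # hit: the orchestrator skips this agent
    results = fetch(query)
    _cache[key] = {"ts": now, "results": results}
    return results, False               # miss: stored for future queries
```

Keying on `(agent, query)` means a grants hit doesn't suppress a papers fetch for the same query — each branch is cached and skipped independently.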
Ranked research results across three sources in ~3 seconds. Demoed live at the Cerebral Valley hackathon with concurrent users hitting the system. Cache hit rate reached ~60% within the demo session.
Graceful degradation works, but the merge logic doesn't know an agent failed — it just works with fewer inputs. I'd add an explicit 'source missing' signal in the ranked output so users know they're seeing partial results, not a complete picture.
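That signal could be as small as tracking which sources actually contributed. A sketch of the idea, assuming the orchestrator hands the merge step a per-source dict (a hypothetical shape, not the current state schema):

```python
EXPECTED_SOURCES = {"grants", "papers", "news"}

def merge_with_coverage(branch_results):
    """Merge per-agent results and flag any expected source that produced nothing.

    branch_results maps source name -> list of results; a failed or timed-out
    agent shows up as an empty or absent entry.
    """
    merged = [item for items in branch_results.values() for item in items]
    present = {src for src, items in branch_results.items() if items}
    missing = sorted(EXPECTED_SOURCES - present)
    return {"results": merged, "missing_sources": missing, "partial": bool(missing)}
```

The ranked output then carries `missing_sources` alongside the results, so the UI can say "news unavailable" instead of silently presenting two-thirds coverage as complete.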