Research Explained

AI papers, made accessible.

Interactive explainers for AI research papers. Every technical term defined, every concept grounded in real-world analogy, with motion graphics for the ideas that are hardest to picture from text alone. Built with Claude Code and HyperFrames.

Papers
ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory
How to make AI agents better at their jobs with every task they see — by distilling reusable strategies from their own successes and failures, with no extra training.
September 2025 Google Cloud AI Research + UIUC + Yale
How Much Do Language Models Memorize?
GPT-style models store ~3.6 bits per parameter. This paper measures exactly how much models memorize, explains double descent, and predicts when privacy attacks fail.
June 2025 Meta FAIR + Google DeepMind + Cornell + NVIDIA
LoRA: Low-Rank Adaptation of Large Language Models
How to fine-tune a 175-billion-parameter model by training only 0.01% of the weights — with no inference latency penalty.
June 2021 Microsoft
Avatar V: Scaling Video-Reference Avatar Generation
How HeyGen generates talking avatar videos that preserve identity, talking style, and micro-expressions from a short reference video.
April 2026 HeyGen Research
Fast KV Compaction via Attention Matching
How to shrink a language model's memory by up to 50x in seconds, plus Ramp Labs' Latent Briefing for efficient multi-agent context sharing.
February 2026 MIT + Ramp Labs
Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory
How to give AI agents persistent memory across conversations — 91% faster and 90% cheaper than processing full context.
April 2025 Mem0