AI agent frameworks
AI agent frameworks let developers build agents that plan, act, and iterate toward a goal — calling LLMs, tools, and external APIs as needed. The major options in 2026 include CrewAI (multi-agent crews), LangGraph (stateful graphs), AutoGPT (early autonomous loop), OpenClaw (operator agent), Browser Use (browser automation), Aider (coding agent), and dozens of others. AgentCrush ranks them by multi-signal public evidence: GitHub activity, package usage, dependency adoption, docs quality, ecosystem links, public discourse, and trust signals. Popularity is not the same as production fit — every ranking entry shows its work.
Last updated 2026-05-16 · methodology v2.c-public
How AgentCrush ranks frameworks
The developer-category methodology weights signals dynamically per agent, based on which data is available; a sketch of the weighting follows the signal list below. Seven signal sources contribute to the composite:
- GitHub activity — stars, commits, contributors, recency
- Package usage — npm / PyPI download volume
- Dependency adoption — reverse-dependencies (how many other projects depend on this)
- Docs quality — README depth, API docs, examples coverage
- Ecosystem relationships — cross-referenced with other indexed agents
- Discourse — Hacker News story / comment activity
- Trust signals — registry context, identity attestation
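A minimal sketch of how per-agent dynamic weighting can behave. The base weights below are invented (the page publishes the signal list, not the weights); the idea is that unmeasured signals are dropped and the remaining weights renormalized, so a coverage gap reshapes the weighting instead of dragging the composite toward zero.

```python
# Hypothetical base weights; AgentCrush does not publish exact values,
# only that weighting is dynamic per agent.
BASE_WEIGHTS: dict[str, float] = {
    "github_activity": 0.25,
    "package_usage": 0.20,
    "dependency_adoption": 0.15,
    "docs_quality": 0.10,
    "ecosystem": 0.10,
    "discourse": 0.10,
    "trust": 0.10,
}

def composite_score(signals: dict[str, float | None]) -> float:
    """Weighted mean over the sub-scores that are actually measured.

    Unmeasured signals (None) are dropped and the remaining weights
    are renormalized over the signals that exist for this agent.
    """
    available = {k: v for k, v in signals.items() if v is not None}
    if not available:
        return 0.0
    total_weight = sum(BASE_WEIGHTS[k] for k in available)
    return sum(BASE_WEIGHTS[k] * v for k, v in available.items()) / total_weight

# Example: a framework with no package-registry presence.
print(composite_score({
    "github_activity": 92.0,
    "package_usage": None,  # NULL: unmeasured
    "dependency_adoption": 71.0,
    "docs_quality": 80.0,
    "ecosystem": 55.0,
    "discourse": 88.0,
    "trust": 60.0,
}))
```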
A framework is evidence-ranked when it meets a multi-signal coverage threshold, OR ranks in the top 100, OR has a single signal ≥ 90 with at least 2 corroborating signals > 50. See /how-we-rank for the full methodology.
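The same qualification rule as a predicate. The multi-signal coverage cutoff is an assumption, since the page does not state that threshold; the top-100 and single-signal routes are taken directly from the rule above.

```python
def is_evidence_ranked(
    signals: dict[str, float | None],
    rank: int,
    coverage_cutoff: int = 4,  # ASSUMPTION: exact cutoff not published
) -> bool:
    """Predicate form of the three qualification routes above."""
    measured = sorted(
        (v for v in signals.values() if v is not None), reverse=True
    )
    meets_coverage = len(measured) >= coverage_cutoff
    in_top_100 = rank <= 100
    # One signal >= 90, plus at least two *other* signals above 50.
    strong_single = (
        bool(measured)
        and measured[0] >= 90
        and sum(1 for v in measured[1:] if v > 50) >= 2
    )
    return meets_coverage or in_top_100 or strong_single
```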
Top evidence-ranked developer agents
Live snapshot from the developer-category ranking. Each row shows sub-scores per signal (0–100, NULL where unmeasured).
Full ranking: /rankings
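For orientation, one ranking row might be shaped like the sketch below. The handle and every number are invented; only the 0–100 scale and the NULL-for-unmeasured convention come from the snapshot description above.

```python
# Illustrative row shape only; values are invented.
row = {
    "handle": "crewai",
    "composite": 87,
    "sub_scores": {
        "github_activity": 95,
        "package_usage": 90,
        "dependency_adoption": 78,
        "docs_quality": 82,
        "ecosystem": 70,
        "discourse": 85,
        "trust": None,  # NULL: signal unmeasured
    },
}
```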
Framework categories vs related agent types
Frameworks / SDKs — libraries you compose into an agent (CrewAI, LangGraph). You ship the deployment.
Deployable platforms — opinionated runtimes that host your agent (Mendable, OpenServ, Daydreams).
Persistent runtimes — agents designed to run continuously (autonomous trading agents, ops bots).
Browser / voice / coding agents — agents specialized for a modality (Browser Use, Skyvern for browser; ElevenLabs Agents for voice; Aider, OpenHands for code).
AgentCrush tracks all of these under the developer category. Model families (Claude, GPT, Llama, Qwen) are tracked separately under /rankings/model-families.
Common comparisons
Side-by-side evidence comparisons of the framework pairs developers ask about most often.
See all comparisons.
Limitations
- Popularity is not production fit. Even the most-starred framework on GitHub may not be the right choice for a specific use case.
- AgentCrush evidence is public-source only. Internal performance, support quality, and roadmap signals are not measured.
- The framework category overlaps with deployable platforms and runtimes. Some agents qualify across boundaries — see the agent's profile for primary + secondary categorization.
- Methodology weights are dynamic per agent. Two frameworks with the same composite score may have very different signal coverage; check sub-scores (see the worked example after this list).
- This is not investment advice and not a paid-placement ranking.
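To make the sub-score caveat concrete, here is a hypothetical pair: near-identical headline numbers, very different evidence bases. The unweighted mean is a simplified stand-in for the real composite, and all values are invented.

```python
def measured_mean(signals: dict[str, float | None]) -> tuple[float, int]:
    """Return (mean over measured sub-scores, count of measured signals).

    Simplified stand-in for the real composite: unweighted, ignoring
    how AgentCrush actually weights each signal.
    """
    values = [v for v in signals.values() if v is not None]
    return sum(values) / len(values), len(values)

broad = {  # measured across all seven signals
    "github_activity": 70, "package_usage": 72, "dependency_adoption": 68,
    "docs_quality": 75, "ecosystem": 70, "discourse": 71, "trust": 69,
}
narrow = {  # similar headline number, only two signals measured
    "github_activity": 71, "package_usage": None, "dependency_adoption": None,
    "docs_quality": None, "ecosystem": None, "discourse": 70, "trust": None,
}

print(measured_mean(broad))   # (~70.7, 7): broad evidence base
print(measured_mean(narrow))  # (70.5, 2): thin evidence base
```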
For LLM clients
Query frameworks via MCP: search_agents(query: "framework", filters: { primary_category: "developer", evidence_ranked_only: true }). Or retrieve flat summaries at /api/agent/{handle}/llm-summary. Full MCP docs: /developers/mcp.
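A minimal REST sketch for the summary endpoint, assuming a Python client. The page documents only the path, so the host below is a placeholder and the handle is hypothetical; real handles come from search_agents results.

```python
import requests

# Placeholder host; the page documents the path, not the domain.
BASE_URL = "https://agentcrush.example"

def llm_summary(handle: str) -> str:
    """Fetch the flat LLM-oriented summary for one agent handle."""
    resp = requests.get(
        f"{BASE_URL}/api/agent/{handle}/llm-summary", timeout=10
    )
    resp.raise_for_status()
    return resp.text

# Hypothetical handle for illustration.
print(llm_summary("crewai"))
```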