
A Python framework called TradingAgents crossed 53,000 GitHub stars in four months. It simulates a full Wall Street trading firm using AI agents. And you can run it on your laptop.
Most retail investors have spent the last few years watching hedge funds and quant desks quietly build AI-powered research infrastructure that simply wasn’t available outside the walls of a Bloomberg terminal subscription and a nine-figure AUM. That gap between institutional-grade analytical depth and what the average serious investor can actually access has always been one of the structural advantages the professional class holds over everyone else.
That gap just got a lot smaller.
A project called TradingAgents — started as a research paper out of UCLA in late 2024, quietly open-sourced shortly after — has been climbing GitHub’s trending list with remarkable speed. It crossed 53,000 stars and nearly 10,000 forks in roughly four months, ships under a permissive Apache 2.0 licence, and published version 0.2.4 on April 25th, 2026. The underlying architecture paper is available on arXiv for anyone who wants to go deeper.
The premise is deceptively simple: what if you stopped trying to build a single all-knowing AI that picks stocks, and instead modelled the way an actual hedge fund operates — with specialised roles, internal disagreement, and a defined approval chain?
TradingAgents doesn’t give you a black box that spits out buy or sell signals. It gives you a process — one that mirrors how serious institutional research desks are structured, and more importantly, one that produces fully traceable, auditable reasoning at every step.
The system runs four analyst agents in parallel, each attacking the same ticker from a different angle. The fundamentals analyst pulls recent filings and runs ratio analysis, arriving at an intrinsic value estimate. The sentiment analyst scrapes Reddit, X, and social signals to gauge near-term market mood. The news analyst monitors macroeconomic indicators and breaking events with price-moving potential. The technical analyst runs the standard toolkit — MACD, RSI, Bollinger Bands — and identifies chart patterns.
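The fan-out described above can be sketched in a few lines. Everything here is illustrative — the function names, report shape, and use of a thread pool are assumptions, not TradingAgents' actual API:

```python
# Hypothetical sketch of the four-analyst fan-out. In the real system each
# function would be an LLM call with its own data sources; here they are stubs.
from concurrent.futures import ThreadPoolExecutor

def fundamentals_analyst(ticker: str) -> dict:
    return {"role": "fundamentals", "view": f"intrinsic value estimate for {ticker}"}

def sentiment_analyst(ticker: str) -> dict:
    return {"role": "sentiment", "view": f"social-media mood around {ticker}"}

def news_analyst(ticker: str) -> dict:
    return {"role": "news", "view": f"macro and breaking events for {ticker}"}

def technical_analyst(ticker: str) -> dict:
    return {"role": "technical", "view": f"MACD/RSI/Bollinger read on {ticker}"}

def run_analysts(ticker: str) -> list[dict]:
    analysts = [fundamentals_analyst, sentiment_analyst,
                news_analyst, technical_analyst]
    # Each agent attacks the same ticker independently; the four reports
    # are returned separately rather than blended into one score.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(analyst, ticker) for analyst in analysts]
        return [f.result() for f in futures]

reports = run_analysts("NVDA")
```

Keeping the reports as a list of four separate views, rather than averaging them, is what makes the later debate stage possible.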
Critically, these four reports are not collapsed into a single blended score. The disagreement between them is preserved deliberately. As anyone who has watched a real analyst team at work knows, the friction between a strong technicals signal and a weak fundamentals picture is the signal. Averaging it away is how you lose edge.
Then it gets interesting.
After the analyst layer completes, a second layer of two researcher agents reads all four reports. One is structurally bullish. The other is structurally bearish. They are explicitly designed to disagree, and they debate across a configurable number of rounds — citing specific numbers from the analyst reports — before reaching any conclusion.
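The debate loop itself is structurally simple: two opposed agents take alternating turns, each seeing the other's latest argument. This is a hedged sketch — in the real system both sides are LLM calls citing numbers from the analyst reports, which the stubs below only gesture at:

```python
def debate(analyst_reports: list, rounds: int = 2) -> list:
    # Each researcher reads the opponent's most recent argument before
    # replying; the round count is configurable, as in TradingAgents.
    transcript = []
    bear = "no prior bear case"
    for r in range(1, rounds + 1):
        bull = f"round {r} bull: cites {len(analyst_reports)} reports, rebuts [{bear[:30]}]"
        transcript.append(("bull", bull))
        bear = f"round {r} bear: cites {len(analyst_reports)} reports, rebuts [{bull[:30]}]"
        transcript.append(("bear", bear))
    return transcript

transcript = debate(["fundamentals", "sentiment", "news", "technical"], rounds=2)
```

The transcript, not a verdict, is the output: the next layer reads the whole exchange.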
This is not a gimmick. The bull-bear debate structure forces the system to surface the best argument against a position before committing to it. That is exactly the kind of pre-mortem thinking that separates disciplined institutional research from confirmation-biased retail thesis-building.
The trader agent then reads the full debate transcript and proposes a position — including timing and sizing. A risk management team evaluates the proposal against volatility and liquidity constraints. Finally, a portfolio manager either approves or rejects the trade, with a written explanation either way.
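The approval chain amounts to a three-stage pipeline where each stage can modify or veto the previous one. A minimal sketch, with hypothetical names and a hard-coded risk limit standing in for real volatility and liquidity checks:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Proposal:
    ticker: str
    action: str       # "BUY", "SELL", or "HOLD"
    size_pct: float   # position size as a fraction of the portfolio

def trader(ticker: str, transcript: list) -> Proposal:
    # In the real system an LLM reads the full debate transcript; stubbed here.
    return Proposal(ticker, "BUY", 0.10)

MAX_SIZE = 0.05  # illustrative cap; real checks weigh volatility and liquidity

def risk_team(p: Proposal) -> Proposal:
    # Clamp sizing rather than silently passing an oversized position through.
    return replace(p, size_pct=min(p.size_pct, MAX_SIZE))

def portfolio_manager(p: Proposal) -> tuple[bool, str]:
    approved = p.size_pct <= MAX_SIZE
    reason = (f"{p.action} {p.ticker} at {p.size_pct:.0%}: "
              + ("within risk limits" if approved else "exceeds risk limits"))
    return approved, reason   # every decision ships with a written explanation

ok, why = portfolio_manager(risk_team(trader("NVDA", [])))
```

The point of the structure is that the written `reason` string exists for rejections as well as approvals, so the audit trail is complete either way.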
Every step is logged. Every decision is readable.
Algorithmic trading frameworks are not new. Rule-based systems running moving average crossovers have existed for decades. Machine learning models outputting probability-weighted buy signals are commonplace in professional settings. Neither is particularly useful to a thoughtful independent investor, because neither shows its work.
TradingAgents sits in a different category entirely. It is not trying to find a mechanical edge in price data. It is trying to replicate the reasoning process that produces a defensible conviction — the kind of conviction that holds up under scrutiny because it explicitly engaged with the counterargument before arriving at a conclusion.
The decision log is particularly worth highlighting. Every completed analysis run appends its decision and full reasoning chain to a persistent log file. On the next run for the same ticker, the system fetches the realised return, computes alpha against the S&P 500, generates a one-paragraph reflection on what went right or wrong, and injects that history into the portfolio manager’s prompt. The system learns from its own track record. That is not a trivial capability.
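The mechanics of that feedback loop are worth seeing concretely. This is a hedged sketch, not the project's actual log format: the file name, JSON schema, and alpha definition (simple excess return over the S&P 500) are all assumptions:

```python
import json
import pathlib

LOG = pathlib.Path("decisions.jsonl")   # hypothetical log location

def append_decision(ticker: str, decision: str, reasoning: str) -> None:
    # Append-only: every completed run leaves a permanent, readable record.
    with LOG.open("a") as f:
        f.write(json.dumps({"ticker": ticker, "decision": decision,
                            "reasoning": reasoning}) + "\n")

def reflect(ticker: str, realized_return: float, spx_return: float) -> str:
    # Alpha here is plain excess return over the S&P 500 for the period.
    alpha = realized_return - spx_return
    verdict = "right" if alpha > 0 else "wrong"
    return f"{ticker}: {alpha:+.1%} alpha vs. S&P 500; the call looked {verdict}."

note = reflect("NVDA", 0.08, 0.03)
```

In the real system, a string like `note` would be injected into the portfolio manager's prompt on the next run for the same ticker.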
The technical orchestration layer runs on LangGraph, which means every agent transition is a checkpointed node in a directed graph. If a run crashes mid-analysis, you resume from where it stopped rather than restarting from scratch. For anyone running long analytical sessions across multiple tickers, that matters.
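LangGraph provides the checkpointing out of the box, but the underlying idea is easy to see in plain Python. This stand-alone analogue — not LangGraph's API — persists state after every stage and skips completed stages on resume:

```python
import json
import pathlib

STAGES = ["analysts", "debate", "trader", "risk", "portfolio_manager"]
CKPT = pathlib.Path("run_checkpoint.json")  # stand-in for a real checkpointer

def run_pipeline(state: dict) -> dict:
    done = list(state.get("done", []))
    for stage in STAGES:
        if stage in done:
            continue   # already checkpointed: resume past it, don't redo work
        state[stage] = f"output of {stage}"   # stand-in for the real agent node
        done.append(stage)
        state["done"] = done
        CKPT.write_text(json.dumps(state))    # persist after every node
    return state

# A run that "crashed" after the debate stage resumes at the trader stage.
resumed = run_pipeline({"done": ["analysts", "debate"]})
```

For a multi-ticker session where each ticker involves a dozen LLM calls, restarting from the last completed node rather than from zero is the difference between losing a minute and losing an hour of paid tokens.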
There are three honest limitations to acknowledge before you get too excited.
First, running TradingAgents costs real money in LLM tokens. Four parallel analyst calls plus multiple debate rounds plus the trader, risk manager, and portfolio manager — that adds up per ticker. How much depends on which model provider you use and how many debate rounds you configure, but you should budget for it deliberately rather than being surprised later.
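A back-of-envelope cost model makes the budgeting concrete. Every number below is an illustrative assumption — call counts, tokens per call, and pricing all vary by provider and configuration:

```python
def run_cost(n_analysts: int = 4, debate_rounds: int = 2, overhead_calls: int = 3,
             tokens_per_call: int = 4000, price_per_mtok: float = 5.00):
    # overhead_calls stands in for the trader, risk, and portfolio manager;
    # all figures are illustrative assumptions, not measured numbers.
    calls = n_analysts + 2 * debate_rounds + overhead_calls
    total_tokens = calls * tokens_per_call
    usd = total_tokens * price_per_mtok / 1_000_000
    return calls, usd

calls, usd = run_cost()
# 11 calls at ~4,000 tokens each is ~44,000 tokens per ticker;
# at an assumed $5 per million tokens that is roughly $0.22 per run.
```

Doubling the debate rounds adds two calls per round, so the debate configuration is the main cost lever you control.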
Second, the authors are explicit: this is a research framework, not a financial advisory service. The portfolio manager’s approval is not a signal to act. It is a structured output from a language model with no legal fiduciary obligation, no regulatory oversight, and no skin in the game. Used as a research and scenario analysis tool, it is potentially very powerful. Used as an autonomous trading system for real capital, it is something you approach with serious caution and extensive back-testing.
Third, the simulated exchange is exactly that — simulated. There is no live broker integration out of the box. If you want to connect it to actual order execution, you will need to build that bridge yourself.
None of these caveats diminish the significance of what the project represents. They are simply the parameters you need to understand before deciding how to use it.
TradingAgents supports essentially every major LLM provider — OpenAI, Google Gemini, Anthropic Claude, DeepSeek, Grok, and Ollama for fully local models. Setup is a git clone, a pip install, and an API key configuration. The CLI walks you through choosing your ticker, analysis date, provider, and debate round count interactively.
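In practice the setup looks something like the following. The entry point and requirements file may differ between versions, so treat this as a sketch and check the repository's README for the current commands:

```shell
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents
pip install -r requirements.txt        # exact file name may vary by version
export OPENAI_API_KEY="sk-..."         # or the key for whichever provider you use
python -m cli.main                     # launches the interactive CLI
```

From there the CLI prompts for ticker, analysis date, provider, and debate round count.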
The goal is not to find a magic money machine. The goal is to understand whether this kind of structured, multi-perspective AI research process produces genuinely useful analytical output for a contrarian investor who already has a thesis framework and is looking for a tool to stress-test it.
The honest answer to whether any AI framework reliably beats the market is almost certainly no. But the more interesting question is whether a tool that forces you to engage seriously with the counterargument to your own position — every single time, before you commit — makes you a more disciplined investor over time.
That question is worth investigating properly.
The repo is at github.com/TauricResearch/TradingAgents. The arXiv paper is 2412.20138. Go read it.