Why we built Tradoki — an AI trading academy, not a signals room
The ten-month story of why Tradoki exists, what we tried first, what we threw away, and the specific decisions that shaped the company we ended up running.

When I first started sketching what would eventually become Tradoki, in the spring of 2025, the document on my desk was a five-page spec for a hybrid product — a course, a signals room, a community Discord, and a monthly subscription that would unlock all three. The spreadsheet for that product closed comfortably. The spec was approximately what every successful trading-education company in the consumer market was running. I drafted it because it was the obvious play. Then we threw it away, rebuilt the company around the part of the original spec that was hardest to monetise, and ended up with a product that takes longer to sell, is harder to scale, and is the only one I am willing to sign my name to.
This is the ten-month case study of how that happened — what we tried, what we threw out, the specific decisions and dates that shaped what Tradoki became, and what the early cohort outcomes have looked like. It is told in the third person where the desk is the actor, and in the first person where I was. None of it is investment advice and none of it is a marketing pitch.
The starting spec, and what was wrong with it
The original five-page spec, dated 2025-04-12, had four product surfaces:
- An eight-week core curriculum. Pre-recorded video, structured weekly cadence, with workbook exercises. Standard format for a course product.
- A live signals room. Two to four ideas per day on a defined instrument list, posted to a Telegram channel, with entry/stop/target.
- A members-only Discord. Community engagement, peer learning, reaction reads.
- A weekly live session. Office hours, Q&A, live market reads.
The spreadsheet said this product would have an LTV/CAC of around 4.2 at our planned price point, payback inside 90 days, and gross margins above 80%. By the standards of the consumer education category, that is a defensible business.
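The spreadsheet's arithmetic is simple enough to sketch. The numbers below are illustrative placeholders chosen to land near the figures above, not our actual inputs:

```python
# Back-of-the-envelope unit economics for a subscription education product.
# All inputs are hypothetical, chosen to illustrate the shape of the model.
price_per_month = 149.0       # monthly subscription price
gross_margin = 0.82           # content businesses typically clear 80%+
avg_lifetime_months = 10      # average months before a subscriber churns
cac = 290.0                   # blended customer-acquisition cost

# LTV is margin-adjusted revenue over the customer's lifetime.
ltv = price_per_month * gross_margin * avg_lifetime_months

# Payback: how many days of margin it takes to recover CAC.
payback_days = cac / (price_per_month * gross_margin) * 30

print(f"LTV/CAC: {ltv / cac:.1f}, payback: {payback_days:.0f} days")
```

With these inputs the model lands at an LTV/CAC around 4.2 and payback inside 90 days — the shape of the spreadsheet, not its contents.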
I sat with the spec for about six weeks. The thing that bothered me about it was not the spreadsheet. It was that I could not articulate, in writing, what skill the median customer would have at the end of twelve months that they did not have at the start, if they used the product the way the marketing implied they would use it.
The customer who used the signals room would have a year of execution experience following someone else's analysis on a retail platform. They would not have the underlying skill to read a market on their own. The customer who used the curriculum but not the signals room would have the skill, but also the constant pull of an open signals channel that distracted from the structured work. The customer who treated the Discord as the primary product would have a community membership and not much else.
Each individual surface was fine. The combination produced an average customer who did the easiest thing — follow signals — and never developed the skill the curriculum was supposed to deliver. I have written separately about why we will never sell signals; the short version of the decision is that I could not stomach building a product whose easy path produced an outcome I did not believe in.
The rebuild, dated June 2025
By 2025-06-22, the spec had been rewritten end to end. The new version had two surfaces:
- An eight-week structured curriculum, with mandatory weekly assignments graded by a human reviewer and a paper-trading requirement that the student had to complete before progressing.
- A cohort model, with fixed start dates, a defined peer group of 30–50 students per cohort, and a cohort lead who tracked progress and ran weekly synchronous sessions.
We removed the signals room entirely. We removed the always-on Discord and replaced it with a cohort-bounded discussion that closed at the end of the program. We removed the open community membership. The product surface area shrank by approximately 40%; the unit economics got meaningfully harder; the spec became something I could defend in writing as producing the skill it claimed to produce.
The first cohort ran from 2025-09-08 to 2025-11-02. Twelve students, hand-recruited from my network. The pricing was a fraction of what it would later become. The goal was not revenue; the goal was to verify that the structure produced the outcome.
What cohort 1 taught us
The headline number from cohort 1 was that nine out of eleven completers were still actively trading, profitably or at breakeven, six months after the program closed. The number is small. The cohort was small. The follow-up was self-reported. The standard caveats apply, and I would not lean on this single data point to make any product claim.
What was more informative than the headline was the texture of the follow-up. Every one of the eleven completers was journaling consistently. Every one had a written framework. Every one had configured their platform with daily and weekly loss limits. None had over-traded their framework in a meaningful way during the six-month window. The two who were down were down within the risk envelope their framework permitted; neither had blown up; both were profitable in the most recent month at the time of the check-in.
That texture was the actual product validation. The skill had transferred. The students were running the routine even when nobody was watching them. The question we had not been able to answer at the spec stage — would the structure produce the skill — had a preliminary yes.
The cohort also surfaced things the spec had wrong:
The weekly synchronous session was the most valuable surface, not the curriculum video. Students reported that the live read-throughs of recent sessions, with the cohort's own questions in the room, were where the discipline got internalised. The recorded curriculum was the prerequisite; the live cohort sessions were the multiplier. We doubled the live time in cohort 2.
The peer accountability was real and measurable. Students who paired up early in the cohort and exchanged journal entries had meaningfully better completion rates and meaningfully better six-month outcomes than students who did not. We made the peer pairing structural in cohort 2.
The paper-trading requirement was the right hill to die on. Three students pushed back during week four about the rule that no live capital was permitted until week six and until a 30-session paper sample was complete. Two of those three were among the strongest performers at the six-month check-in. The discipline was the point.
The eight-week duration was too short for one segment and right for another. Students with prior trading experience completed comfortably; complete beginners needed roughly an extra week to absorb the risk-of-ruin pillar and the journaling discipline. We added an optional "week zero" prerequisite for cohort 2.
Cohort 2 and the AI layer
Cohort 2 ran from 2025-12-08 to 2026-02-08, with 38 students. By this point we had added the AI tooling layer that gave the company its name — a set of internal applications that accelerated the work between decisions for both the cohort leads and the students.
The AI layer specifically:
- Drafts personalised feedback on student journal entries, which the cohort lead reviews and edits before sending. Reduces the cohort lead's per-student review time from ~25 minutes to ~8 minutes per week without reducing feedback quality.
- Summarises central-bank releases and high-impact macro prints into a 200-word brief that students read alongside the original release. Increases the rate at which students engage with the underlying release.
- Generates practice scenarios for the deliberate-practice plan — variations on a base setup with different regime contexts — that students work through during the program.
- Code-reviews student strategy scripts (Pine Script, mostly) against our static review checklist before a human review. Catches the obvious bugs (look-ahead, repaint, default tester settings) before the human reviewer's time is spent on them.
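The pre-review pass on student scripts is, at its core, a set of pattern checks over the source. A minimal sketch of the idea follows — the three rules below are simplified illustrations of the kinds of issues named above, not our actual checklist:

```python
import re

# Simplified sketch of a static pre-review pass over Pine Script source.
# The patterns are illustrative examples, not Tradoki's full rule set.
CHECKS = [
    # request.security() with lookahead_on reads future bars on historical data.
    (r"lookahead\s*=\s*barmerge\.lookahead_on", "possible look-ahead bias"),
    # calc_on_every_tick=true can make live behaviour diverge from the backtest.
    (r"calc_on_every_tick\s*=\s*true", "intrabar calc may not match backtest"),
    # A strategy() declaration without a commission argument runs on default
    # tester settings, which flatters the backtest.
    (r"^strategy\((?![^)]*commission)", "no commission set: default tester settings"),
]

def review(source: str) -> list[str]:
    """Return the list of checklist items flagged for a script."""
    findings = []
    for pattern, message in CHECKS:
        if re.search(pattern, source, flags=re.MULTILINE):
            findings.append(message)
    return findings

script = (
    'strategy("demo")\n'
    'sec = request.security(syminfo.tickerid, "D", close, '
    'lookahead=barmerge.lookahead_on)'
)
print(review(script))
```

The point of the pass is triage, not judgment: anything it flags still goes to the human reviewer, whose time is then spent on the non-obvious problems.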
What the AI layer explicitly does not do:
- Generate signals.
- Recommend trades.
- Size positions.
- Make any decision that touches a market.
The asymmetry is the product principle. The AI accelerates the work between decisions; the human always takes the decisions. We have written about the broader pattern in why AI live trading bots blow up and in LLMs as research assistants, not traders.
The cohort 2 outcomes (still preliminary; final follow-up is in mid-2026) are tracking ahead of cohort 1 on the metrics we care about. Completion rate is higher (33/38 vs. 11/12 in raw counts but a higher percentage given the larger and less hand-selected cohort). Self-reported framework consistency at the 30-day post-cohort check-in is meaningfully higher. The AI tooling appears to have done what it was designed to do: increase the leverage of the cohort lead and the depth of the student's practice, without inserting itself into the trading decisions.
What we have not built, on purpose
The product surfaces we have explicitly chosen not to build are as load-bearing as the ones we have built. Listed for completeness:
- No live signals room. See the long-form reasoning.
- No always-on Discord. The cohort discussion is bounded; it closes when the cohort closes. Alumni get a separate, small, low-traffic alumni space.
- No "lifetime access" pricing tier. Lifetime tiers attract customers who consume the curriculum once and never engage with the practice — exactly the customer the product is not designed for.
- No upsell to a higher tier with "premium signals." The product is one product. There is no premium tier whose value proposition is the thing we said we would not sell.
- No affiliate commissions paid for student referrals. The customer who learned of Tradoki through an affiliate stack is, in our experience, a different customer from the one who found it through the work. We compensate the second; we do not compensate the first.
- No proprietary indicators sold separately. We use widely available tools (TradingView, the standard indicator library) and teach how to use them. The competitive advantage we sell is the teaching, not the tools.
Each of these "no" decisions costs revenue in the short term and produces a customer base I am willing to be accountable to in the long term. The trade is intentional.
What we measure
The metrics we track for the company are:
- Cohort completion rate. The percentage of starting students who complete all eight weeks and submit the final paper-trading sample.
- Six-month student outcome rate. The percentage of completers still actively trading within their framework at the six-month follow-up.
- Eighteen-month student outcome rate. The same metric at eighteen months. (Cohort 1 will produce this number in mid-2027.)
- Cohort lead utilisation. The hours per week each cohort lead spends per student. We optimise this down over time as the AI tooling improves.
- Net Promoter at six months. Self-reported, only collected after the student has had six months to evaluate whether the program produced the outcome they wanted.
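For concreteness, the first two metrics reduce to plain ratios over cohort records. A minimal sketch with hypothetical data (the real tracking lives in our internal tooling):

```python
# Hypothetical cohort records; field names are illustrative.
cohort = [
    {"student": "a", "completed": True,  "trading_at_6mo": True},
    {"student": "b", "completed": True,  "trading_at_6mo": False},
    {"student": "c", "completed": False, "trading_at_6mo": False},
]

# Completion rate: share of starting students who finish all eight weeks.
completion_rate = sum(s["completed"] for s in cohort) / len(cohort)

# Six-month outcome rate: share of *completers* still trading in-framework.
completers = [s for s in cohort if s["completed"]]
outcome_rate = sum(s["trading_at_6mo"] for s in completers) / len(completers)

print(f"completion: {completion_rate:.0%}, six-month outcome: {outcome_rate:.0%}")
```

Note the denominator switch: the outcome rate is conditioned on completion, which is why we report both numbers rather than folding them into one.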
The metrics we explicitly do not optimise for:
- Subscription renewal rate. The product is not a subscription; the customer's relationship with us ends at the end of the cohort (with optional alumni engagement that is not monetised).
- Time-on-platform. We want students to spend the minimum time on the LMS that produces the outcome — the goal is competence, not engagement.
- Volume of signals consumed. Trick question; we publish none.
A company optimising for renewal rate and engagement looks structurally different from one optimising for student outcomes. We made the choice up front; the choice keeps being right every quarter we check.
— Author's note: We measure what the customer should care about, not what the dashboard would have us care about. The two are different and the difference compounds.
What comes next
The next cohort (cohort 3, starting March 2026) is sized at 60 students. The eight-week curriculum content is largely stable; the changes between cohorts are operational (AI tooling improvements, cohort lead workflow refinements, peer pairing protocol updates), not pedagogical. The curriculum is detailed in inside the Tradoki eight-week syllabus.
The cohort model is detailed in the cohort model and why trading students finish. The deliberate-practice arc that students follow during and after the cohort is detailed in the ninety-day deliberate practice plan. The journaling system is detailed in the trading journal and post-mortem template.
We will publish the eighteen-month cohort 1 outcomes in mid-2027, with the standard caveats about sample size and self-reporting. Until then, the case for the product is the same case it was always going to be: a smaller group of students, doing harder work, producing transferable skill. Slower than the alternative. The only path that, in our records and in the industry data we have collected, actually pays.
FAQ
- When did Tradoki start?
- Late June 2025, with a much smaller scope than the current product. The first cohort ran in September 2025, with twelve students. The second, running into early 2026, had 38; the next is sized at 60.
- Why an academy and not a signals room?
- Because the data on consumer signals services and our own observation of student outcomes told us that signals do not transfer skill. We wanted students to leave the program able to trade without us, not subscribed to us.
- What does the AI part actually do?
- It accelerates the work between human decisions — drafting research notes, summarising central-bank prints, generating practice scenarios, code-reviewing student strategies. It does not place trades, size positions, or generate signals. The human always decides.
- Who is Tradoki for?
- Active retail traders who are willing to do the structured work — multi-week curriculum, daily journaling, paper trading before live capital. It is not for traders looking for a signals subscription or a faster path.
- How is success measured?
- By whether students are still trading, profitably and within their risk framework, eighteen months after completion. Renewal is not a metric we optimise for; transferable skill is.