Writing Pine Script with Claude and GPT without burning your account
AI assistants are good at producing Pine Script that compiles. They are bad at producing Pine Script that does what you meant. Here is the workflow we use to keep that gap from costing real money.

The same trader who would never let a junior analyst push code straight to production will happily paste an AI-generated Pine Script into TradingView, hit "Add to chart," and proceed to size live capital based on the resulting equity curve. The strategy tester shows a clean upward line. The Sharpe is implausibly good. The trader concludes the AI cracked it. Forty-five sessions later, the live account is bleeding and nobody understands why. AI assistants are excellent at producing Pine Script that compiles, and unreliable at producing Pine Script that does what you meant — and the difference between those two outcomes is where most retail accounts have been quietly killed in the last twelve months.
This is the AI-and-Pine workflow we use inside the Tradoki desk. It is opinionated, conservative, and built specifically to catch the bugs that AI introduces and that human reviewers miss because the script "looks right." None of it is a recommendation to live-trade any specific strategy. It is the discipline that separates "AI helped me prototype an idea" from "AI helped me lose money quickly."
What AI is actually good at in Pine
Both Claude and GPT, as of early 2026, are genuinely useful for several Pine-Script tasks:
- Translating an English description into a syntactically valid v6 indicator. Standard moving averages, oscillators, custom plots, table objects, alert conditions — both models produce working code on the first or second pass for routine cases.
- Refactoring a legacy v4 or v5 script into v6. The version-migration patterns are well-represented in their training data. Both will catch the easy version-specific footguns (changes to `request.security`, `barstate` semantics, `strategy.entry` parameter ordering).
- Explaining what an existing script does. Asking either model to walk through a Pine script line-by-line is one of the fastest ways to audit code you did not write.
- Generating boilerplate. Strategy headers, input declarations, plot styling, alert templates. The kind of work that is mechanical and tedious for a human and instant for a model.
For these uses, AI saves real time. The desk uses both Claude and GPT daily for exactly these tasks.
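As a concrete instance of the boilerplate category: a minimal v6 indicator skeleton of the kind both models produce reliably on the first pass. The names, lengths, and levels here are illustrative placeholders, not desk settings:

```pine
//@version=6
indicator("RSI skeleton (illustrative)", overlay = false)

// Input declarations — the mechanical part AI gets right almost every time.
rsiLen  = input.int(14, "RSI length", minval = 1)
osLevel = input.int(30, "Oversold level")
obLevel = input.int(70, "Overbought level")

r = ta.rsi(close, rsiLen)

plot(r, "RSI", color = color.teal)
hline(osLevel, "Oversold")
hline(obLevel, "Overbought")
```

This is exactly the kind of output that is safe to accept nearly verbatim: no strategy logic, no bar-reference ambiguity, nothing to mis-specify.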
Where AI is dangerous
The dangerous failure modes are not in the syntax. They are in the semantics of the strategy, and the model has no way to know whether the semantics it produces match what you actually meant.
The recurring offenders, in roughly the order of how often they hit our reviews:
1. Look-ahead bias. The script reads a value that would not have been available at the bar's close — most commonly by referencing close instead of close[1] inside an entry condition that the model intended to evaluate against the previous bar. The backtest looks excellent because the strategy is effectively trading on tomorrow's price. The live script, with no future to look at, performs nothing like the backtest.
2. Repainting indicators. A script that uses request.security with non-standard arguments, or that calculates conditions based on the developing bar rather than confirmed closes, will produce different signals on a live bar than it does on a historical one. The strategy tester scores the historical signals; live trading executes the developing-bar signals; the equity curves diverge.
3. Strategy-tester defaults that bear no resemblance to your account. The default initial capital, the default commission, the default slippage, and the default order size assumptions in TradingView are not your broker's. AI-generated strategies routinely ship with the defaults intact, producing returns that rely on a free, frictionless market.
4. Implicit timeframe coupling. A script written for the 1-hour chart is silently assuming things about session timing, ATR scale, and indicator length that do not transfer to the 5-minute or the daily. AI will happily generate a script "for any timeframe" that is in fact tuned for one and dangerous on the others.
5. Warm-up handling. Indicators that need N bars of history to be valid are sometimes used by AI-generated entries before they are warm. The first N entries on a fresh chart use undefined values, and the strategy tester silently ignores or mishandles them.
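The first two failure modes above reduce to a bar-reference discipline that is easy to show in code. A minimal sketch — the moving-average breakout is illustrative, not a desk strategy — contrasting the pattern AI tends to write with the one that was actually meant:

```pine
//@version=6
indicator("Bar-reference check (illustrative)", overlay = true)

// Intended rule: signal when the *prior* bar closed above its 20-bar MA.
ma = ta.sma(close, 20)

// What the model often writes — reads the developing bar, so historical
// bars are scored on a final close that live execution never had:
// breakout = close > ma

// What was meant — both values read from the completed prior bar:
breakout = close[1] > ma[1]

// Repaint guard: only treat the signal as final once the bar is confirmed.
plotshape(breakout and barstate.isconfirmed, style = shape.triangleup,
     location = location.belowbar, color = color.green)
```

The difference is one `[1]` per term, which is precisely why human reviewers miss it: both versions compile, both plot, and only one of them backtests honestly.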
The brief: write the prompt like a spec, not a wish
The worst AI Pine outputs we see are generated from one-line prompts. "Write me a Pine v6 strategy that buys when RSI crosses above 30 and sells when it crosses below 70" is under-specified in approximately twelve ways the model has to guess at, and it will guess wrong on at least one.
The brief we teach inside the curriculum has six parts. We will not paste a complete one here because the spec varies per strategy, but the structure is:
- Pine version explicitly. "Pine v6, no v5 fallbacks." Pin it.
- Asset class and timeframe. "EUR/USD, 1-hour chart." This constrains warm-up assumptions.
- Exact entry condition with bar reference. Not "when RSI crosses above 30." Instead: "when the close-of-bar RSI(14) value is greater than 30 and the close-of-bar RSI(14) value of the previous bar was less than or equal to 30." Spell out which bar each value is read on.
- Exit conditions, identical level of specification. Stops, targets, time-based exits — each with the bar reference and the price reference (high, low, close, intra-bar fill).
- Strategy-tester configuration. Initial capital, commission per trade, slippage in ticks, order quantity rule. Do not let the model use defaults.
- What the strategy must not do. "Must not look ahead. Must not repaint. Must not enter on a bar still developing." Banned behaviours, written down.
A six-part brief is more work to write than a one-line prompt. It is also significantly less work than reviewing the script the one-line prompt produced.
Pine v6 strategy.
Instrument: EUR/USD spot, retail broker.
Timeframe: 1-hour, RTH session only.
Entry long: prior bar's close-of-bar RSI(14) <= 30 AND current bar's close-of-bar RSI(14) > 30.
Exit: 2× ATR(14) stop on bar close past entry, or 30 bars elapsed.
Strategy tester: $10,000 initial, $5/trade commission, 1 tick slippage, fixed $1,000 risk per trade.
Constraints: no look-ahead, no repaint, do not act on the developing bar, must compile cleanly, no v5 patterns.

The static review pass: what to check before running the strategy tester
Before the strategy tester ever runs, the script gets a static review against a checklist. We keep this short because long checklists do not get used; ours fits on a single screen:
- Bar references. Every value used in an entry or exit condition is read with an explicit `[1]` (or higher) where the intent is "the prior bar."
- No `request.security` with `lookahead = barmerge.lookahead_on` — or, if it is there, the trade is on a confirmed bar of the higher timeframe.
- Conditions evaluate on confirmed bars. Either the script is wrapped in a `barstate.isconfirmed` check, or every entry uses `strategy.entry` rather than logic that fires on the developing bar.
- Strategy-tester settings match the brief. Initial capital, commission, slippage, order quantity — read off the strategy panel, not assumed.
- Warm-up. The script either skips the first N bars (where N is the longest indicator length) or initialises the indicators to known values.
- Per-trade risk is calculated, not hard-coded. A `risk_per_trade = capital * 0.005` calculation is more honest than `qty = 1000` when comparing across instruments and timeframes.
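Several of those checklist items can be seen together in one sketch — warm-up guard, explicit prior-bar reads, confirmed-bar gate, calculated risk. The thresholds, multipliers, and the 0.5% risk figure are placeholders under the brief's assumptions, not desk parameters:

```pine
//@version=6
strategy("Checklist sketch (illustrative)", overlay = true)

atrLen = input.int(14, "ATR length")
rsiLen = input.int(14, "RSI length")

// Warm-up guard: wait out the longest lookback, with margin for the
// recursive smoothing in RSI/ATR to settle.
warmup = math.max(atrLen, rsiLen) * 3
ready  = bar_index >= warmup

// Per-trade risk calculated from equity, not a hard-coded quantity.
riskPerTrade = strategy.equity * 0.005
stopDist     = 2 * ta.atr(atrLen)
qty          = stopDist > 0 ? riskPerTrade / stopDist : na

// Explicit [1] everywhere the intent is "the prior bar".
r          = ta.rsi(close, rsiLen)
longSignal = r[1] <= 30 and r > 30

if ready and longSignal and barstate.isconfirmed
    strategy.entry("L", strategy.long, qty = qty)
    strategy.exit("X", "L", stop = close - stopDist)
```

Every guard in that sketch corresponds to a checklist line; a draft missing any one of them fails the review.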
Nearly every AI-generated draft we have ever reviewed has failed at least one item on this list. Most fail two or three.
The strategy-tester setup that does not lie to you
Once the static review passes, the strategy tester gets configured to lie as little as possible. Defaults are off; realism is on:
- Commission. Match your broker's actual per-trade or per-contract cost. Round up.
- Slippage. A ticks-based slippage greater than zero. For most retail forex and futures, 1–3 ticks is a defensible starting point; for less liquid markets, more.
- Initial capital. Match the size of the account you would actually run this on. The strategy tester's behaviour scales nonlinearly with capital because of position-size rounding.
- Order quantity. Fixed risk in dollars, not fixed quantity. A strategy that risks $500 per trade across $10,000 capital is a different beast from one that risks 1 contract per trade.
- Pyramiding off, unless explicitly intended. TradingView allows multiple entries per signal by default in some configurations; that is rarely what was intended and is rarely caught.
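All five of those settings can be pinned at the declaration level so they travel with the script rather than living in the panel. A sketch matching the example brief's numbers (the figures are the brief's, not recommendations):

```pine
//@version=6
// Declaration-level overrides for the optimistic tester defaults.
strategy("Realistic tester config (illustrative)",
     overlay           = true,
     initial_capital   = 10000,
     commission_type   = strategy.commission.cash_per_order,
     commission_value  = 5,       // $5 per order, per the brief
     slippage          = 1,       // 1 tick per fill
     default_qty_type  = strategy.cash,
     default_qty_value = 1000,    // fixed dollar sizing, not fixed contracts
     pyramiding        = 0)       // one position per signal, stated explicitly
```

Putting the configuration in the `strategy()` call also means the static review can verify it by reading the source, instead of trusting whatever the panel happens to hold.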
Even with realistic settings, the strategy tester is still optimistic relative to live trading. The fills are at the historical bar's price; the live fills are at whatever your broker's order routing produces. The slippage model is a constant; the live slippage is a distribution. The latency is zero; the live latency is whatever your network and your broker's infrastructure compose to.
The paper trading window: the bug-finding stage
After the strategy tester says yes, paper trading begins. This is the stage that catches the bugs the strategy tester hid.
We require a minimum of 60 sessions of paper trading on a strategy before live capital is even considered. The number is not arbitrary; it is the smallest sample we have found that surfaces the typical look-ahead and repaint bugs in real-time conditions. Strategies that look perfect in the strategy tester regularly degrade meaningfully in paper trading because the live signal is being computed on developing bars or with broker-side data that differs subtly from TradingView's historical feed.
The paper-trading journal entry is non-negotiable. Every entry, every exit, every signal that did not produce an entry, every divergence between the live signal and what the strategy tester would have produced on the same bar — recorded. We use the trading journal template for exactly this.
If the paper trading equity curve materially diverges from the strategy tester equity curve over the same window, the strategy goes back to the static review. There is a bug somewhere in the brief, the script, or the tester configuration. The strategy is not yet ready for live capital.
The strategy tester answers the question, "is the code internally consistent?" Paper trading answers the question, "is the code consistent with reality?" Live trading answers the question, "is reality the way you assumed it was?" Each question kills more strategies than the last.
— The Tradoki desk note
Where AI fits in the workflow afterwards
Once a strategy has survived the review pass, the strategy tester, and the paper trading window, AI moves back into a useful role: documenting it. We use Claude routinely to:
- Generate a plain-English description of what the strategy does, suitable for the journal.
- Produce alert message templates that include the relevant context (instrument, timeframe, signal type, position size).
- Refactor sections that have become unreadable as the strategy evolved.
- Translate the strategy into pseudocode that can be reviewed by a non-Pine reader.
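The alert-template item is the most mechanical of the four. A fragment of the kind we ask the model to generate — assumed to live inside a v6 strategy script; the message format is illustrative, not a Tradoki template:

```pine
// Alert message carrying instrument, timeframe, signal type, and size.
longAlert = str.format("{0} {1} | long entry | pos={2}",
     syminfo.ticker, timeframe.period,
     str.tostring(strategy.position_size))

if barstate.isconfirmed and strategy.position_size > 0
    alert(longAlert, alert.freq_once_per_bar_close)
```

Note the `barstate.isconfirmed` gate and `alert.freq_once_per_bar_close`: even in the documentation role, the repaint discipline carries over, or the alerts will fire on developing bars the strategy never traded.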
These uses are safe because the model is not making strategy decisions. It is rendering a strategy that has already passed the human review.
For the broader picture of what AI can and cannot do in trading workflows, see what AI can and cannot do in trading in 2026, and for the reasons we are skeptical of AI live-trading systems specifically, see why AI live trading bots blow up. For the underlying risk math that makes a "small" live size genuinely small, see risk of ruin and position sizing.
FAQ
- Can Claude or GPT actually write Pine Script?
- Both write syntactically valid Pine Script for common patterns. Both occasionally invent functions, get version semantics wrong, and silently misread your intent on edge cases. Treat their output as a draft, not a deliverable.
- Which AI is better at Pine Script in 2026?
- Claude has been more reliable in our testing for following Pine v6 semantics and for explaining its reasoning. GPT has been faster at producing first drafts. Both miss the same kinds of strategy-logic bugs — the model choice does not save you from the workflow.
- What is the most dangerous bug AI introduces?
- Look-ahead bias — using future-bar values in the entry calculation. The script backtests beautifully and falls apart live. AI models will introduce this without warning if your prompt is ambiguous about which bar a value is read on.
- Should you live-trade a Pine script written by AI?
- Only after a structured review pass that catches look-ahead, repaint, broker-fee assumptions, slippage modelling, and warm-up bars — and only after a paper trading window long enough to expose the bugs the backtest hid.
- What is the AI workflow you actually use?
- Brief in plain English, generate a strict v6 draft, run our static review checklist, run the script in TradingView's strategy tester with realistic costs, then paper trade. Live capital comes after, not during, the review.