Take a screenshot of any stock chart on your Mac, and an AI agent instantly analyzes the candlestick patterns, fetches live market data, and tells you what it means. No manual research. No copy-pasting tickers. Just point, shoot, and understand.
| Attribute | Details |
|---|---|
| Difficulty | ⭐⭐ (Level 2 - Intermediate) |
| Who's it for | Traders, investors, financial analysts, automation enthusiasts |
| Problem solved | Manual chart analysis is slow and error-prone. This automates visual chart interpretation. |
| Original template | n8n.io/workflows/2970 |
| Tools used | n8n, OpenRouter, MarketStack, SerpAPI, macOS Shortcuts |
| Setup time | ~15 minutes |
| Time saved | 5-10 minutes per chart analysis |
The Problem With Staring at Charts
David once spent forty-five minutes analyzing a single candlestick chart, convinced he'd spotted a rare "inverted hammer forming a cup-and-handle breakout pattern." Turned out it was just Tesla doing Tesla things. The chart was upside down on his second monitor.
Here's what most people do when they see an interesting stock chart: screenshot it, open a new tab, search for the ticker, read three conflicting articles, forget what pattern they originally noticed, and eventually give up and go see what's for lunch.
This workflow eliminates all of that. Take a screenshot. Get analysis. Move on with your life.
What This Workflow Does
The magic happens in four connected stages. Your Mac captures a screenshot and sends it as Base64 data to an n8n webhook. The workflow converts that image into a format the AI can process. An AI agent powered by GPT-4o-mini through OpenRouter examines the chart, identifies patterns, and determines what it's looking at. Then it pulls in real context from MarketStack for current prices and SerpAPI for relevant news. Finally, it sends back a clean, formatted analysis.
The AI doesn't just describe what it sees. It interprets. It tells you whether that pattern typically signals a reversal or continuation. It mentions if there's news that might explain the movement. It gives you the context you'd normally spend ten minutes gathering.
Quick Start Guide
You'll need accounts with four services before you start: OpenRouter for AI model access, MarketStack for stock data, SerpAPI for news search, and obviously n8n for the workflow itself. All of them offer free tiers that work fine for personal use.
Import the workflow into n8n and configure your credentials. The webhook URL it generates becomes the endpoint your Mac will call. Copy that URL carefully—you'll need it for the shortcut.
The macOS Shortcut is the interface. Download it from the original template page, double-click to install, and configure it with your webhook URL and optional API key for authentication. Once installed, you can trigger it from the menu bar, keyboard shortcut, or right-click menu. Capture a chart, and the analysis appears seconds later.
Building the Pipeline
The Webhook node sits at the entrance, listening for incoming requests. It receives the Base64-encoded image data from your Mac and passes it downstream. Authentication is optional but recommended—use a simple API key header if you're exposing this to the internet.
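If you want to test the webhook without the Shortcut in the loop, you can fake its request from a terminal. The sketch below is an assumption about the payload shape, not a copy of the template: it sends a JSON body with an `image` field and an `x-api-key` header, so adjust the URL, field name, and header name to match your own Webhook node and Shortcut configuration.

```python
import base64
import json
import urllib.request

# Fakes the request the macOS Shortcut sends: a JSON body carrying the
# screenshot as a Base64 string, plus a shared-secret header.
# The URL, field name, and header name are placeholders; match them to
# your own Webhook node and Shortcut configuration.
WEBHOOK_URL = "https://your-n8n-host/webhook/chart-analysis"
API_KEY = "your-shared-secret"

with open("chart.png", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode("ascii")}

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
)

with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # the formatted analysis coming back
```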
The Convert to File node transforms that Base64 string into actual binary image data. This step is essential because the AI agent expects a proper image file, not encoded text. Without this conversion, the model would receive gibberish and respond accordingly.
For Advanced Readers: The Base64 payload arrives in the request body. The Convert to File node uses the "From Base64 String" option with the data field mapped to the incoming JSON property. Output format should be set to binary with an appropriate MIME type like image/png.
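For illustration, here's the same transformation in plain Python, assuming the `image` field from the sketch above; the Convert to File node does the equivalent internally.

```python
import base64

# What the Convert to File node does internally: take the Base64 string that
# arrived in the webhook body and turn it back into raw image bytes.
incoming = {"image": base64.b64encode(open("chart.png", "rb").read()).decode("ascii")}

binary = base64.b64decode(incoming["image"])  # raw bytes the vision model can read
with open("decoded_chart.png", "wb") as f:
    f.write(binary)
```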
The AI Agent node is where reasoning happens. It connects to OpenRouter's GPT-4o-mini model, which handles vision tasks efficiently. The agent has access to two tools: MarketStack for fetching real-time stock prices and historical data, and SerpAPI for searching recent news about identified tickers.
Model selection matters here. You must choose a vision-capable model. GPT-4o-mini works well and is cost-effective. Claude 3 Haiku is another option through OpenRouter. Using a text-only model will throw errors because it can't process the image input.
Window buffer memory gives the agent context within a session. If you're analyzing multiple charts in sequence, it remembers the previous ones and can compare patterns or note contradictions. This is optional for single-shot analysis but helpful for extended research sessions.
For Advanced Readers: The agent's system prompt should instruct it to: (1) identify the asset type and ticker if visible, (2) describe the chart timeframe, (3) identify candlestick patterns, (4) note support/resistance levels, (5) use the tools to gather context, and (6) provide a synthesis. Structure the output with clear sections for pattern analysis, market data, news summary, and overall interpretation.
The Markdown node converts the AI's response from Markdown to HTML. This makes the output render cleanly in most interfaces. The final Respond to Webhook node sends everything back to your Mac, where the Shortcut displays it as a notification or saves it to Notes.
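If you're curious what that conversion looks like outside n8n, the widely used third-party `markdown` package (an assumption here; install it separately) does essentially the same job:

```python
# Reproducing the Markdown node's job outside n8n, using the third-party
# "markdown" package (pip install markdown).
import markdown

analysis_md = "## Pattern analysis\n\n**Bullish engulfing** on the daily chart."
print(markdown.markdown(analysis_md))
# -> <h2>Pattern analysis</h2>
#    <p><strong>Bullish engulfing</strong> on the daily chart.</p>
```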
The Tools That Do the Heavy Lifting
MarketStack provides real-time and historical stock data through a simple REST API. When the AI identifies a ticker in the chart, it can query current price, daily change, and recent history. This grounds the visual analysis in actual numbers. The free tier offers 100 requests per month—plenty for personal use.
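To make that concrete, here's a rough sketch of the kind of end-of-day lookup the MarketStack tool performs once the agent has a ticker. The endpoint and response fields follow MarketStack's EOD API as commonly documented; verify them, and your plan's HTTPS support, against the current docs before relying on this.

```python
import json
import urllib.parse
import urllib.request

# Rough equivalent of the MarketStack tool's end-of-day lookup for a ticker
# the agent has identified. The access key is a placeholder.
ACCESS_KEY = "your-marketstack-key"
params = urllib.parse.urlencode({"access_key": ACCESS_KEY, "symbols": "TSLA", "limit": 5})

with urllib.request.urlopen(f"https://api.marketstack.com/v1/eod?{params}") as resp:
    data = json.load(resp)

for bar in data.get("data", []):  # each entry is one trading day
    print(bar["date"], bar["close"])
```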
SerpAPI scrapes Google results programmatically. When the agent finds a ticker, it searches for recent news and analyst opinions. This catches earnings announcements, regulatory filings, or market-moving events that might explain the patterns in the chart. The free tier gives you 100 searches monthly.
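Here's a similar sketch of the news lookup. The parameters follow SerpAPI's Google search API with the news vertical; double-check them against the SerpAPI docs, since the exact query the agent builds will vary.

```python
import json
import urllib.parse
import urllib.request

# Rough equivalent of the SerpAPI tool's news search for the same ticker.
# The API key is a placeholder; "tbm=nws" asks for Google's news vertical.
SERPAPI_KEY = "your-serpapi-key"
params = urllib.parse.urlencode({
    "engine": "google",
    "q": "TSLA stock",
    "tbm": "nws",
    "api_key": SERPAPI_KEY,
})

with urllib.request.urlopen(f"https://serpapi.com/search.json?{params}") as resp:
    results = json.load(resp)

for item in results.get("news_results", [])[:5]:
    print(item.get("title"), "-", item.get("link"))
```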
OpenRouter acts as a gateway to multiple AI providers. Instead of managing separate API keys for OpenAI, Anthropic, and others, you get one interface to all of them. Pricing is transparent and often competitive. For image analysis, their access to GPT-4o-mini or Claude 3 models gives you professional-grade vision capabilities.
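Under the hood, the AI Agent node is making a vision call along these lines: OpenRouter exposes an OpenAI-compatible chat completions endpoint, and the screenshot travels as a Base64 data URL inside the message. The model slug, prompt, and key below are assumptions drawn from that convention, not lifted from the template.

```python
import base64
import json
import urllib.request

# A vision request through OpenRouter's OpenAI-compatible chat completions
# endpoint: the screenshot rides along as a Base64 data URL.
# The model slug and API key are placeholders.
OPENROUTER_KEY = "your-openrouter-key"

with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

body = {
    "model": "openai/gpt-4o-mini",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Identify the ticker and any candlestick patterns in this chart."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
}

req = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {OPENROUTER_KEY}"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```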
Key Learnings
Vision models change what automation can understand. A year ago, processing an image meant OCR or basic object detection. Now you can send a screenshot and get nuanced financial analysis. This opens up entirely new categories of workflows—anything visual becomes automatable.
Tool use makes agents dramatically more capable. The AI could analyze the image alone, but without MarketStack and SerpAPI, it would be guessing about current prices and recent news. Giving it access to real data transforms speculation into informed analysis.
Platform shortcuts are underrated triggers. macOS Shortcuts, iOS Shortcuts, Windows Power Automate—these native automation tools make excellent frontends for n8n workflows. They're fast to trigger, easy to customize, and don't require building a separate interface.
What's Next
You've got a working chart analyzer. Now make it yours. Add a crypto API for non-stock charts. Pipe the output to Notion for a research log. Set up a Discord bot that accepts chart images and responds with analysis.
The pattern here—screenshot, interpret, enrich, respond—applies to far more than stocks. Medical images, architecture diagrams, handwritten notes. Vision AI plus tool access plus n8n orchestration equals a lot of previously-impossible automations.
Ship it today. See a chart. Screenshot it. Let the machine tell you what it means. Then maybe teach David which way candlesticks are supposed to point.
