The Creative Strategy Claude Skill Guide: Build an AI-Powered Ad Research System

By PrashantBhatkal · March 26, 2026 · 8 min read

The average creative strategist works across six tabs: the Meta Ad Library, TikTok Creative Center, a Google Doc with a swipe file, a spreadsheet of angles, a brand voice doc, and a Notion page full of half-written briefs. The output of all that is usually one decent hook and a lot of second-guessing. This is a solved problem.

Claude skills change the structure of that work. Instead of moving between tools manually, you build a system where each agent handles one stage of the research-to-script pipeline, passing its output to the next. The creative strategist stops being the connector between tools and starts being the editor of a process that runs without them.

This guide walks through a five-agent creative strategy system, what each agent does, which tools it connects to, and how to sequence them from raw competitor data to a finished first draft.

What Claude Skills Are (and Why They Matter for Creative Work)

A Claude skill is a reusable, prompted agent that can access external tools via MCP (Model Context Protocol). You write a skill once, give it a specific job, and point it at the tools it needs. From then on, triggering that skill produces consistent, structured output without you rebuilding the prompt each time.
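Concretely, a skill is a markdown file with a small frontmatter header and a job description. The sketch below is illustrative: the frontmatter fields follow Anthropic's Agent Skills convention, but the file name and instructions are hypothetical examples, not a shipped skill.

```markdown
---
name: competitor-scan
description: Queries the saved ad library for recent competitor ads and returns a structured summary by brand, angle, and platform.
---

# Competitor Scan

When triggered, query the connected ad library MCP and return:
1. The top 10 longest-running ads for the named competitor
2. Each ad's hook format, platform, and first-seen date
3. A one-line note on the angle each ad uses

Always return results as a markdown table.
```

Because the instructions live in the file rather than in your chat history, every run of the skill starts from the same brief.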

For creative strategy, the relevant tools are ad libraries, review scrapers, attribution platforms, and brand reference docs. MCPs are the connectors that let Claude read from those sources directly, rather than relying on copy-pasted inputs. The result is a research pipeline that starts from live data instead of whatever you happened to save last week.

The Spreshapp MCP setup guide covers how to connect your saved ad library to Claude in one terminal command. Once connected, Claude can query your folders, search by brand or angle, and surface patterns across platforms without you opening a single tab.

The System at a Glance

The full creative strategy skill tree has five agents, organized by pipeline stage. The first two are data-grounded and require MCP connections; the idea generator scores its output through a sub-skill; the last two are pure AI and need no external tools beyond your own reference files.

creative-strategy/ skill tree

▶ 📂 creative-strategy/
▶ 📄 research-agent.md  competitor ads, reviews, reddit, tiktok
▶ 📂 skills/
├─ 📄 competitor-scan.md  what ads are working right now?
└─ 📄 review-mining.md  what language do customers use?
▶ 📂 mcps/
├─ 🔌 spreshapp-mcp  competitor ad library
└─ 🔌 firecrawl  scrapes reviews, reddit, tiktok comments
▶ 📄 persona-builder.md  psychographics from real data
▶ 📂 mcps/
├─ 🔌 meta-api  audience data, ad performance
├─ 🔌 shopify-api  customer + order data
└─ 🔌 triple-whale-api  attribution + cohort data
▶ 📄 idea-generator.md  ideas per angle, scored + ranked
▶ 📂 skills/
└─ 📄 ad-grading.md  scores against proven patterns
▶ 📄 hook-writer.md  Disrupt x Qualify x Gap formula (pure AI)
▶ 📄 script-drafter.md  first drafts in brand voice (pure AI)
▶ 📂 references/
├─ 📄 brand-voice.md  tone, language, do's and don'ts
└─ 📄 past-winners.md  hooks and angles that performed

The tree reads top to bottom as a pipeline: research feeds persona-building, persona-building shapes idea generation, ideas become hooks, hooks become scripts. Each agent is independent enough to run on its own, but they compound when run in sequence.

Node 1: research-agent.md

The research agent is the entry point for the whole system. Its job is to answer two questions: what ads are working right now, and what language do customers actually use when talking about the problem your product solves.

It does this through two sub-skills. The competitor-scan skill connects to Spreshapp MCP and queries your saved ad library, pulling the most recent competitor ads by brand, angle type, or platform. Because Spreshapp aggregates ads from Facebook, TikTok, and Google/YouTube, the competitor scan returns cross-platform data in a single query rather than requiring three separate research sessions. You can ask it things like "show me the longest-running hooks from [competitor] in the last 60 days" and get structured output you can pass to the next agent.

The review-mining skill connects to Firecrawl to scrape Amazon reviews, Reddit threads, and TikTok comment sections for the vocabulary real customers use. This matters because ad copy that converts uses customer language, not brand language. "My lower back finally stopped hurting after three weeks" is a customer line. "Ergonomically optimized lumbar support" is a brand line. The first one is a hook. The second is a spec sheet.

The output of the research agent is a structured brief: the top performing ad angles in the category right now, with the hook formats that have survived longest, plus a bank of customer phrases organized by pain point and outcome. That brief is the input for the next two agents.
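One way to picture the hand-off is as a typed data structure. The field names below are assumptions for illustration, not the agent's actual output schema; the point is that the brief is structured data, not a wall of prose.

```python
from dataclasses import dataclass, field

@dataclass
class AdAngle:
    """One competitor angle surfaced by the scan."""
    name: str
    hook_format: str
    days_running: int  # longevity as a proxy for performance

@dataclass
class ResearchBrief:
    """Structured hand-off from the research agent to downstream agents."""
    top_angles: list[AdAngle] = field(default_factory=list)
    # customer phrases keyed by pain point, e.g. {"back pain": [...]}
    customer_phrases: dict[str, list[str]] = field(default_factory=dict)

brief = ResearchBrief(
    top_angles=[AdAngle("outcome-led", "talking-head testimonial", 62)],
    customer_phrases={"back pain": ["my lower back finally stopped hurting after three weeks"]},
)
print(brief.top_angles[0].days_running)  # 62
```

Anything shaped like this can be passed verbatim to the persona builder and idea generator without re-explaining context.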

Node 2: persona-builder.md

Most persona work is guesswork dressed up as research. Demographic buckets from a shared Notion doc, psychographics invented in a workshop, a fictional name and a stock photo. None of it comes from actual purchase or engagement data, which means none of it is reliable for brief writing.

The persona builder connects to three MCPs that hold real behavior data: Meta API for audience composition and ad engagement by segment, Shopify API for purchase history and customer order patterns, and Triple Whale for attribution and cohort analysis. From these sources it synthesizes a data-grounded persona: which segments are converting, what they bought first, how long before they bought again, and which ad angles drove the initial conversion.
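The synthesis step amounts to joining three sources on segment and keeping only segments with evidence on both sides. The sketch below is a toy version under assumed field names; the dicts stand in for what the Meta, Shopify, and Triple Whale MCPs would return.

```python
# Hypothetical per-segment data; field names and thresholds are assumptions.
meta_segments = {"25-34 female": {"ctr": 0.031}, "45-54 male": {"ctr": 0.012}}
shopify_cohorts = {"25-34 female": {"repeat_rate": 0.42}, "45-54 male": {"repeat_rate": 0.18}}
attribution = {"25-34 female": {"first_touch": "tiktok"}, "45-54 male": {"first_touch": "facebook"}}

def build_persona(min_ctr=0.02, min_repeat=0.3):
    """Keep only segments with evidence on both engagement and retention."""
    persona = {}
    for segment, meta in meta_segments.items():
        repeat = shopify_cohorts.get(segment, {}).get("repeat_rate", 0)
        if meta["ctr"] >= min_ctr and repeat >= min_repeat:
            persona[segment] = {
                "ctr": meta["ctr"],
                "repeat_rate": repeat,
                "first_touch": attribution[segment]["first_touch"],
            }
    return persona

print(list(build_persona()))  # ['25-34 female']
```

Segments that fail either threshold simply never reach the brief, which is the difference between a persona and a guess.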

The output is not a demographic sheet. It is a set of angles that have evidence behind them: this segment responds to outcome-led copy, this cohort re-engages on social proof, first-time buyers from TikTok need a different trust arc than returning buyers from Facebook. That kind of persona actually changes what you write.

Node 3: idea-generator.md

The idea generator takes the research brief and the persona profile and produces a ranked list of ad ideas, grouped by angle. It is not a brainstorm dump. Each idea is scored against proven patterns before it surfaces.

The scoring happens through the ad-grading sub-skill, which evaluates each idea against criteria extracted from high-performing ads: is the hook disruptive within the first three seconds, does it qualify the right audience, does it open a curiosity gap that the body copy closes. Ideas that score below the threshold get filtered out. What remains is a shortlist of high-confidence angles, ordered by strength.
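The filter logic can be sketched in a few lines. The criteria mirror the three checks above; the weights and the 0.6 threshold are illustrative assumptions, not the skill's actual values.

```python
def grade_idea(idea, threshold=0.6):
    """Score an idea 0-1 against three weighted criteria.
    Returns the score, or None if it falls below the threshold."""
    criteria = {
        "disrupts_in_3s": 0.4,      # does the hook break the pattern fast?
        "qualifies_audience": 0.3,  # does it signal who this is for?
        "opens_curiosity_gap": 0.3, # does the body have something to close?
    }
    score = sum(weight for key, weight in criteria.items() if idea.get(key))
    return score if score >= threshold else None

ideas = [
    {"name": "before/after POV", "disrupts_in_3s": True, "qualifies_audience": True},
    {"name": "feature rundown", "qualifies_audience": True},
]

shortlist = sorted(
    (idea for idea in ideas if grade_idea(idea) is not None),
    key=grade_idea,
    reverse=True,
)
print([i["name"] for i in shortlist])  # ['before/after POV']
```

The "feature rundown" idea scores 0.3 and never surfaces, which is the whole point: the brainstorm dump dies before it reaches the brief.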

The practical effect is that you stop pitching ten ideas and hoping two survive. You arrive at a brief with five ideas that have already passed a filter, which changes how the client conversation goes and how quickly creative testing gets started. The TikTok ad creative analysis shows the pattern-level criteria that the ad-grading skill draws from, specifically the hook structures and video lengths that consistently drive performance.

Node 4: hook-writer.md

Hook writing is a pure AI skill. No MCPs, no external data. The inputs are the top ideas from the idea generator and the customer language from the research agent. The output is three to five hook variations per idea, written to a specific formula.

The formula is Disrupt x Qualify x Gap. Disrupt means the first line stops the scroll by breaking the expected visual or verbal pattern. Qualify means the second line signals exactly who this is for, filtering out everyone who is not the target. Gap means the setup creates a tension the viewer needs to resolve, which keeps them watching.

Running this as a skill rather than a freeform prompt produces consistent output because the formula is encoded in the skill file itself. You are not re-explaining the framework every session. The skill knows the framework and applies it to whatever ideas you feed in. The variation comes from the ideas and the customer language, not from the structural prompt shifting between sessions.
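Encoding the formula in the skill file might look like the sketch below. This is illustrative: the frontmatter follows the same skill-file convention, but the exact wording of the instructions is hypothetical.

```markdown
---
name: hook-writer
description: Writes 3-5 hook variations per idea using the Disrupt x Qualify x Gap formula.
---

For each idea supplied, write 3-5 hooks. Every hook must:
1. Disrupt: line one breaks the expected visual or verbal pattern
2. Qualify: line two names exactly who this is for
3. Gap: the setup creates a tension the body must resolve

Pull phrasing only from the supplied customer-language bank, never from brand copy.
```

Once the formula lives in the file, the only variables per session are the ideas and the language bank you feed in.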

Node 5: script-drafter.md

The script drafter takes the highest-scoring hooks and writes full first-draft scripts. Like the hook writer, it is pure AI with no MCP dependencies. Its reference files do the grounding.

The brand-voice.md reference defines the tone, prohibited phrases, sentence rhythm, and energy level of the brand. Without this file, Claude defaults to a generic neutral register that rarely matches how a brand actually sounds. With it, the scripts come out in the right voice without manual editing at the end.
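A brand-voice file does not need to be long to work. A minimal illustrative sketch, with invented example content:

```markdown
# brand-voice.md

## Tone
Direct, warm, second person. Short sentences. No exclamation marks.

## Never say
"game-changer", "revolutionary", "ergonomically optimized"

## Always prefer
Customer outcomes over product specs: "your back stops hurting" beats "lumbar support".
```

The "never say" list tends to do the most work, because it catches the generic register Claude falls back to without guidance.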

The past-winners.md reference contains hooks and full scripts from ads that have performed, with notes on what made each one work. This is the compressed learning from the swipe file, distilled into examples Claude can pattern-match against. A well-structured swipe file makes this reference file genuinely useful. A folder of unsorted screenshots does not.

The output of the script drafter is a first draft, not a final script. The strategist still edits. But starting from a draft that is already in brand voice, built on a proven hook, and structured around a graded angle is a different starting point than a blank page.

How to Start: Connect Spreshapp MCP First

The highest-leverage first step in this system is connecting the research agent to live competitor data. Without real ad data, the research brief is based on whatever you can manually recall or paste, which reintroduces the tab-switching problem the system is designed to eliminate.

Spreshapp MCP connects your saved ad library to Claude in one terminal command. Once it is running, the competitor-scan skill can query your folders directly: which brands are running the longest, which hook formats are showing up most frequently, what angles are new in the last two weeks. The research agent goes from a manual synthesis task to a structured query that returns in seconds.

The full MCP installation guide walks through the setup. It takes about five minutes and works with Claude Desktop out of the box.
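For orientation, registering an MCP server in Claude Desktop is an entry in `claude_desktop_config.json` under the standard `mcpServers` key. The server name, command, and package below are placeholders; the installation guide has the exact one-liner.

```json
{
  "mcpServers": {
    "spreshapp": {
      "command": "npx",
      "args": ["-y", "spreshapp-mcp"]
    }
  }
}
```

After restarting Claude Desktop, the competitor-scan skill can see the server and query your library directly.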

The System Is Modular

You do not have to run all five agents to get value from the system. The research agent and the hook writer are usable on their own. So is the script drafter, if you already have angles you believe in.

The full pipeline from research-agent through script-drafter produces a complete creative package in a single work session: competitive context, persona grounding, ranked ideas, hook variations, and first-draft scripts. That is what used to take three days of scattered research spread across six tabs. The system does not eliminate the strategist. It removes the logistical overhead so the strategist can focus on the one thing that actually requires judgment: deciding which ideas are worth testing.

Connect your ad library to Claude in one command

Spreshapp MCP lets Claude search your saved ads, list folders, and surface winning patterns using natural language. Add it as a skill data source and your research agent starts with real competitor data.