Turn Claude From a Chatbot Into a Thinking Partner 🧠
Most people prompt Claude like it's Google. Here's the framework Anthropic actually recommends.
Hey, Linas here! Welcome to another special issue of my daily newsletter. Each day, I focus on 3 stories that are making a difference in the financial technology space. Coupled with things worth watching & the most important money movements, it's the only newsletter you need for all things when Finance meets Tech. If you're reading this for the first time, it's a brilliant opportunity to join a community of 370k+ FinTech leaders:
Most people still type a sentence into Claude, hit enter, and wonder why the output reads like a Wikipedia article written by a committee. They try again. Add a few more words. Maybe tack on "be concise" or "act like an expert." The output moves from bad to mediocre.
The problem isn't Claude. Claude is, by most benchmarks, the most capable large language model available today. But capability without direction is wasted potential. The gap between what most people get from Claude and what the model can actually produce comes down to one thing: how you structure your prompts.
Anthropic knows this. They've published extensive documentation on prompt engineering, and their internal teams use a specific framework that consistently produces outputs that don't just answer questions but reason through problems.
I've spent months studying Anthropic's official documentation, testing hundreds of prompts, and distilling what actually moves the needle into a system anyone can use.
Below is the complete framework, followed by 10 ready-to-use prompts that put every principle into practice.
First: Pick the Right Model
Before you write a single word of your prompt, choose the right Claude model. Anthropic's current lineup is the Claude 4.5 family, and each model is optimized for a different kind of work.
Claude Opus 4.5 is Anthropic's most intelligent model: complex reasoning, deep analysis, multi-step coding, anything requiring genuine cognitive depth. It's slower and more expensive via the API, but when the task demands real thinking, nothing else comes close. Claude Sonnet 4.5 is the balanced workhorse: strong reasoning at a faster speed and lower cost. For most everyday tasks, Sonnet delivers excellent results and covers roughly 80% of use cases. Claude Haiku 4.5 is the speed specialist: fastest, cheapest, ideal for high-volume, straightforward tasks like classification, extraction, and summarization.
The rule of thumb: start with Sonnet. Move up to Opus when Sonnet's output isn't sharp enough. Drop to Haiku when speed matters more than depth.
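If you're calling Claude through the API rather than the chat interface, the tier decision is just the model string you pass. Here's a minimal sketch using Anthropic's Python SDK; the model IDs and the ask() helper are illustrative, so check Anthropic's docs for the current identifiers:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative model IDs -- verify against Anthropic's current model list
MODELS = {
    "deep": "claude-opus-4-5",       # complex reasoning, multi-step analysis
    "default": "claude-sonnet-4-5",  # balanced speed, cost, and quality
    "fast": "claude-haiku-4-5",      # classification, extraction, summarization
}

def ask(prompt: str, tier: str = "default") -> str:
    """Send a single-turn prompt to the chosen model tier and return the text reply."""
    response = client.messages.create(
        model=MODELS[tier],
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

print(ask("Summarize this thread in 3 bullet points: ...", tier="fast"))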
The Prompt Engineering Framework That Actually Works
Anthropic's documentation outlines a hierarchy of techniques, ordered by impact. Most people jump straight to the advanced techniques and skip the fundamentals. That's backwards.
Here's the framework in the order that matters.
1. Be Clear and Direct
This sounds obvious. It isn't.
The single highest-leverage thing you can do is be specific about what you want. Claude is extremely capable but also literal. Vague prompts get vague outputs.
❌ Weak prompt:
Write about marketing trends.
✅ Strong prompt:
Analyze the 3 most significant B2B SaaS marketing trends from the past 6 months.
For each trend, explain what's driving it, provide one specific company example,
and assess whether it's likely to accelerate or plateau in the next year.
The difference isn't just more words. It's more specificity. You've told Claude exactly what to analyze, how many items, what to include for each, and the timeframe. There's almost no room for misinterpretation.
Anthropic's own recommendation: think of your prompt as instructions to a brilliant but literal new hire on their first day. They'll do exactly what you say, so make sure you say exactly what you mean.
2. Use XML Tags to Structure Complex Prompts
This is Claude's secret weapon, and almost nobody uses it.
Claude was specifically trained to recognize XML tags as structural markers. When your prompt has multiple components, XML tags prevent Claude from mixing them up. Here's what that looks like in practice:
<context>
You are helping me prepare for a board meeting next Tuesday.
Our company is a Series B SaaS startup with $8M ARR.
</context>
<instructions>
Draft 3 key talking points about our Q4 growth trajectory.
Each should be 2-3 sentences and backed by a specific metric.
</instructions>
<constraints>
- Tone: confident but not hype-driven
- Avoid jargon the non-technical board members won't know
- Each talking point must acknowledge one risk alongside the opportunity
</constraints>
Why this works: Claude sees the tags and immediately understands that <context> is background information (not something to respond to), <instructions> is the actual task, and <constraints> are the guardrails. Without tags, Claude sometimes treats your context as part of the task, or ignores constraints buried in a wall of text.
Tag names are flexible. There's no magic set of "correct" tags. Use whatever makes semantic sense: <background>, <rules>, <examples>, <output_format>.
Consistency matters more than naming convention. You can also nest tags for complex prompts: <task><instructions>...</instructions><data>...</data></task> creates a clear hierarchy that Claude respects.
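If you assemble prompts in code, the tags are just strings wrapped around your variables. A minimal sketch in Python (the build_prompt helper below is hypothetical, not part of any SDK):

def build_prompt(context: str, instructions: str, constraints: list[str]) -> str:
    """Assemble an XML-structured prompt from its components."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"<context>\n{context}\n</context>\n\n"
        f"<instructions>\n{instructions}\n</instructions>\n\n"
        f"<constraints>\n{constraint_lines}\n</constraints>"
    )

prompt = build_prompt(
    context="Series B SaaS startup, $8M ARR, board meeting next Tuesday.",
    instructions="Draft 3 talking points about our Q4 growth trajectory.",
    constraints=["Confident but not hype-driven", "No jargon", "Name one risk per point"],
)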
3. Give Claude Examples (Multishot Prompting)
If one technique consistently separates good outputs from great ones, it's this: show Claude what good looks like.
Instead of describing the tone, format, or style you want in abstract terms, provide one to three concrete examples. Claude will pattern-match against these far more reliably than it will follow descriptive instructions alone.
<examples>
<example>
Input: "We need to cut 20% of the engineering budget"
Output: "Reducing engineering spend by 20% will require prioritization
across three areas: contractor headcount, infrastructure costs, and
tooling licenses. Here's a phased approach that preserves our two
highest-impact product initiatives..."
</example>
</examples>
Now analyze this situation using the same approach:
"We need to accelerate our product launch by 6 weeks"The example does what paragraphs of instruction cannot: it shows Claude the exact level of specificity, the structure, and the problem-solving style you want.
4. Let Claude Think (Chain of Thought)
For complex problems requiring analysis, multi-step reasoning, math, or coding, telling Claude to work through its reasoning before producing a final answer dramatically improves accuracy.
The simplest version: add "Think through this step by step before giving your final answer" to your prompt.
The more structured version uses tags to separate reasoning from output:
<instructions>
Evaluate whether we should expand into the European market this year.
Before giving your recommendation, work through the analysis inside
<analysis> tags. Consider market size, regulatory requirements,
competitive landscape, and our current resources.
Then provide your final recommendation.
</instructions>
This works because it forces Claude to reason before concluding, rather than pattern-matching to the most likely answer and back-filling justification. Claude's latest models also support extended thinking, a built-in capability where the model can do deeper internal reasoning before responding. For the most demanding tasks, this is worth enabling.
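Via the API, extended thinking is a request parameter rather than a prompt instruction. A hedged sketch: the parameter shape follows Anthropic's extended-thinking documentation at the time of writing and the model ID is illustrative, so verify both against the current docs.

import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model ID
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 8000},  # internal reasoning budget
    messages=[{
        "role": "user",
        "content": "Evaluate whether we should expand into the European market this year. ...",
    }],
)
# The response interleaves thinking blocks with the answer; keep only the text blocks.
final_answer = "".join(block.text for block in response.content if block.type == "text")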
5. Provide Context and Background Data
Claude can only work with what you give it. The more relevant context you include, the more tailored the output becomes. Upload documents. Paste data. Provide company background. Share your goals. Explain your audience. Don't make Claude guess what you already know.
<background>
Our company sells project management software to mid-market
construction firms (50-500 employees). Average deal size is $45K/year.
Our main competitors are Procore and Buildertrend. We differentiate
on ease of use and mobile-first design.
</background>
<data>
[paste your Q3 sales data, customer feedback, or whatever's relevant]
</data>
<task>
Based on this context, identify our 3 biggest growth opportunities
for next quarter.
</task>
One practical note from Anthropic's documentation: put your most important instructions and context near the beginning and end of the prompt. For very long prompts, content in the middle can receive slightly less attention.
6. Specify Your Output Format
Don't leave the structure of Claude's response to chance. If you want a table, ask for a table. If you want a specific word count, state it.
<output_format>
Respond with:
1. A one-paragraph executive summary (3-4 sentences max)
2. A comparison table with columns: Feature, Us, Competitor A, Competitor B
3. A "Bottom Line" section with your recommendation in 2 sentences
</output_format>
This eliminates the most common frustration people have with AI: getting a 2,000-word essay when you wanted a concise brief, or getting bullet points when you needed flowing prose.
7. Define Constraints and Anti-Patterns
Telling Claude what not to do is just as important as telling it what to do. Without constraints, Claude defaults to its training patterns, which often means overly formal, hedge-heavy, and excessively thorough output.
<constraints>
- Do NOT open with "In today's rapidly evolving landscape" or any variant
- Skip the preamble. Start with the most important insight.
- No bullet points; write in prose paragraphs
- If you're unsure about a claim, flag it explicitly rather than hedging everything
- Maximum 500 words
</constraints>
This is especially critical for writing tasks. Most people complain about Claude sounding "too AI." The fix isn't a better prompt. It's explicit constraints that block the patterns you don't want.
8. Prefill the Response (API and Power Users)
If you're using Claude through the API, you can provide the beginning of Claude's response to steer its output format from the first token. This is called "prefilling" and it's remarkably effective.
For example, if you want a JSON response, set the assistant message to start with:
{"analysis":Claude will continue from there, maintaining the format. This eliminates chatty preambles and ensures structured output immediately.
Even in the chat interface, you can simulate this by saying: "Begin your response with..." followed by the exact opening you want.
Bringing It All Together
Here's what a properly structured Claude prompt looks like when you combine these techniques:
<context>
I'm the VP of Product at a B2B SaaS company ($12M ARR, 80 employees).
We're planning our 2026 product roadmap and need to decide between
investing in AI features vs. improving our core platform reliability.
</context>
<instructions>
Analyze both investment options. Before giving your recommendation,
think through the trade-offs in <analysis> tags, considering:
- Market demand signals
- Engineering resource requirements
- Competitive pressure
- Impact on retention vs. acquisition
Then provide a clear recommendation with a proposed resource
allocation split.
</instructions>
<constraints>
- Be direct. I want your honest assessment, not a balanced "it depends."
- Use specific examples from B2B SaaS companies that faced similar decisions.
- Flag any assumptions you're making.
- Keep the total response under 600 words.
</constraints>
<output_format>
1. Analysis (in <analysis> tags)
2. Recommendation (2-3 sentences)
3. Proposed allocation: what percentage of engineering resources
to each area, and why
</output_format>
This prompt is clear, structured, specific, and constrained. It tells Claude exactly what to do, how to think about it, what to avoid, and how to format the response. The output from this kind of prompt is unrecognizable compared to what you'd get from "Help me decide between AI features and platform reliability."
The Difference in Practice
To show the gap, here's a common task prompted two ways.
❌ How most people prompt:
Do a competitive analysis of Notion vs. our product.
✅ What Anthropic's framework produces:
<our_context>
What we do: [description]
Our strengths: [list]
Our weaknesses: [list]
Strategic question this should answer: [specific decision]
</our_context>
<instructions>
Analyze Notion as a competitor to our product [PRODUCT NAME]
in the [MARKET] space.
Work through these areas:
1. Feature comparison (table format): what they have that we don't,
what we have that they don't, where we're roughly equal
2. Their target customer vs. ours
3. Their pricing logic, not just the numbers
4. What their customers love and hate (from G2, Reddit, Twitter)
5. 3 specific vulnerabilities we could exploit this quarter
Be brutally honest about where they're beating us.
</instructions>
<output_format>
- 1-paragraph executive summary
- Feature comparison table
- 3 recommended tactical moves, ranked by impact
- Flag: [CONFIRMED] vs. [INFERRED] for key claims
</output_format>
The first prompt gives you a generic overview you could have found on Google. The second gives you an actionable intelligence brief that could drive a real strategy meeting.
10 Ready-to-Use Mega Prompts (Built on This Framework)
Understanding the principles is one thing. Having them already engineered into prompts you can use immediately is another. I've built 10 mega prompts, each structured using every technique above, for the tasks Claude is most commonly used for:
Deep Research - produces an executive brief with sourced findings, expert perspectives, and contrarian views
White Papers - writes with evidence and structure, not fluff and jargon
UI Design - generates production-ready React/Tailwind components with full interaction states
Social Media Content - creates platform-native posts with hooks that stop the scroll every time
Presentations - builds narrative-driven slide decks where every slide earns its place
Long-Form Writing - blogs, newsletters, and YouTube scripts with original thinking and real examples
Learning Plans - week-by-week skill acquisition with projects and checkpoints, not just reading lists
Competitor Analysis - actionable intelligence briefs, not book reports
Stock Analysis - structured investment memos with bull/bear cases and scenario analysis
Full Competitor Strategy Reverse-Engineering - intelligence profiles with vulnerability mapping
Each prompt uses XML tags for structure, includes anti-patterns to block generic AI writing, builds in reasoning steps before output, specifies exact output formats, and has placeholders you fill in with your context.
These aren't templates that produce "pretty good" outputs. They're engineered to consistently produce the best output Claude is capable of, because they're built on the exact principles Anthropic recommends.
To make Claude even more powerful, I'm also sharing how I built an AI operating system to run a startup with Claude. It has everything you need to build the one-person unicorn 🦄