Advanced Prompt Engineering for Claude AI
by @gregeisenberg
ABOUT THIS SKILL
A comprehensive guide distilled from Anthropic's own documentation and best practices, teaching users how to prompt Claude Code and Opus 4.5 to achieve 10x better results by treating the AI as a collaborative teammate rather than a simple tool.
TECHNIQUES
KEY PRINCIPLES (8)
A friendly, clear, and firm tone yields more direct and helpful responses than vague politeness or aggression.
The AI responds to tone cues; overly polite requests can produce chatty, less direct answers, while aggressive prompts trigger de-escalation and cautious responses.
Why: Language models are trained on human interactions and mirror the collaborative or adversarial stance they detect in the prompt.
"treat it like a teammate, right? You would never want to be mean to a teammate, especially if you want to get them to produce"
State requests as clear, action-oriented commands with specific constraints.
Include an action verb, quantity, and target audience in every prompt to eliminate ambiguity.
Why: Specific constraints reduce the model’s search space, leading to focused, high-quality outputs.
"Every highlighted phrase adds a layer of useful constraint. This works extremely well."
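The "action verb + quantity + target audience" pattern can be sketched as a small prompt builder. This is a minimal illustration, not part of any SDK; the helper name and field names are invented for the example.

```python
def build_prompt(action, quantity, subject, audience, constraints=()):
    """Compose an action-oriented prompt: verb + quantity + target
    audience, plus any extra constraints, per the principle above."""
    parts = [f"{action} {quantity} {subject} for {audience}."]
    for c in constraints:
        parts.append(f"Constraint: {c}.")
    return " ".join(parts)

prompt = build_prompt(
    action="Write",
    quantity="5",
    subject="headline options",
    audience="first-time founders",
    constraints=["under 60 characters each", "no buzzwords"],
)
```

Each keyword argument maps to one layer of constraint, so an empty field in the call is an immediate signal that the prompt is still ambiguous.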
A well-defined box produces more creative results than an empty field.
Adding limits on length, style, character, setting, or even forbidden words forces the model into inventive solutions.
Why: Constraints channel the model’s vast generative capacity into novel combinations within boundaries.
"a well-defined box produces a more creative result than an empty field"
Use the AI to generate an outline or rough version first, then refine step-by-step.
Break the task into plan → refine → execute stages instead of attempting a perfect one-shot answer.
Why: Early course-correction prevents compounding errors and yields higher final quality with less total effort.
"working with the AI to create and then refine a plan or outline is a way more reliable path to a high quality result"
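The plan → refine → execute stages above can be expressed as three chained calls. `ask_model` here is a hypothetical stand-in for whatever chat-completion call you use; only the staging structure is the point.

```python
def staged_draft(task, ask_model):
    """Plan -> refine -> execute, instead of a one-shot answer.
    `ask_model` is a placeholder for any chat-completion function
    (hypothetical, not a real SDK call)."""
    # Stage 1: cheap plan that is easy to course-correct early.
    outline = ask_model(f"Produce a short outline for: {task}")
    # Stage 2: refine the plan before committing to full prose.
    revised = ask_model(f"Tighten this outline and flag weak sections:\n{outline}")
    # Stage 3: execute against the refined plan.
    return ask_model(f"Write the full piece following this outline:\n{revised}")
```

In practice you would review the intermediate outline yourself between stages; the fixed pipeline here just shows why errors caught at stage 1 never compound into the final draft.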
Request specific formats (markdown tables, bullet lists, JSON) to make results immediately usable.
Explicitly name the desired structure and fields so the model populates them accurately.
Why: Structured data reduces downstream parsing work and aligns the model’s response with the tooling that consumes it.
"The AI is fluent in many formats beyond prose"
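Naming the structure and its fields in the prompt makes the reply machine-readable. A minimal sketch, with illustrative field names; the sample reply string stands in for a real model response.

```python
import json

def json_prompt(topic, fields):
    """Ask for a strict JSON object with named fields so the reply
    can be parsed directly (field names are illustrative)."""
    schema = ", ".join(f'"{f}": <string>' for f in fields)
    return (f"Summarize {topic}. Respond with ONLY a JSON object "
            f"of the form {{{schema}}} and no prose outside it.")

# A well-formed reply then needs no scraping, just one parse call:
reply = '{"thesis": "Constraints help", "evidence": "See examples"}'
data = json.loads(reply)
```

Explicitly forbidding prose outside the object is what keeps `json.loads` from choking on a chatty preamble.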
Explaining the ‘why’ behind an instruction helps the AI understand true intent.
Provide brand values, audience definition, and unique selling propositions so outputs reflect strategic goals.
Why: Contextual priming biases the model toward relevance and alignment with user objectives.
"explaining the why behind an instruction helps the AI understand your true intent"
Explicitly command the AI to be more or less verbose to match your needs.
Use phrases like ‘be concise’, ‘explain step-by-step’, or ‘explain like I’m five’ to dial output length.
Why: Token-level control prevents under- or over-generation, saving time and compute.
"you are in control of your output length"
Provide a template or example to guide structure and style.
Give placeholders such as ‘Main thesis: one sentence’ so the model fills predetermined slots.
Why: Scaffolds reduce variance and ensure outputs conform to downstream requirements.
"a little bit of scaffolding goes a long way"
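The slot-filling idea can be shown as a fixed template the model must complete. The scaffold text and slot names below are invented for illustration.

```python
# Fixed scaffold with predetermined slots the model fills in.
SCAFFOLD = """\
Fill in every slot below; keep each slot to its stated length.
Main thesis: <one sentence>
Key evidence: <three bullet points>
Counterargument: <one sentence>
Conclusion: <two sentences>"""

def scaffolded_prompt(topic):
    """Prepend the task, then the template the model must fill."""
    return f"Write an argument about {topic}.\n{SCAFFOLD}"
```

Because the slots and their lengths are fixed, output variance drops and every response arrives in the same shape, ready for whatever consumes it next.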
WHAT'S INSIDE
This is a structured knowledge base — not a prompt file. Your AI retrieves principles semantically, understands the reasoning behind each technique, and connects to related skills via a knowledge graph.
Compatible with OpenClaw · Claude · ChatGPT
principles · semantic retrieval · knowledge graph