Generative AI in the Enterprise

One Prompt to Rule Them All? Not Quite

Written by David Russell · 8 minute read

A few quarters ago, our team faced a flood of quick AI requests. We needed to summarize RFCs, draft unit tests, and create rollout plans. We tried to standardize on one prompting template, and promptly face-planted. For tiny tasks, our heavyweight template slowed everyone down. For gnarly ones, our lightweight template leaked edge cases. The lesson was simple: frameworks aren’t dogma; they’re levers. Choose the one that fits today’s constraints (speed vs. rigor, ambiguity vs. clarity, throwaway text vs. production-grade outputs) and you win. That’s also what the major vendor guides hint at: provide context, be specific, include examples, set format and constraints, and iterate until it’s right. The only real variable is how much ceremony you add for your situation.

All of these frameworks are built on the following aspects of the "Perfect Prompt":


Role - “You are a [persona].”

Tell the AI who to act like so it uses the right knowledge and voice.

  • Why it helps: A “math tutor” explains step-by-step; a “senior Python developer” writes clean code.
  • Example: “You are a patient algebra tutor.”

Context - “[Files, errors, user story].”

Give the facts the AI needs before it starts. Think: background info, the problem, and any clues.

  • Why it helps: Without context, the AI guesses and can be wrong.
  • Example: “Here’s my code and the error message: TypeError on line 12.”

Task - “[One verb, one objective].”

Say exactly what you want, using one strong action word.

  • Why it helps: Short and clear tasks lead to focused answers.
  • Good verbs: explain, fix, rewrite, summarize, compare, generate, classify.
  • Example: “Fix the bug and explain the fix in one paragraph.”

Examples (optional) - “Input → Output pairs.”

Show a small sample of what a correct answer looks like.

  • Why it helps: The AI can copy the pattern you want.
  • Example:
      • Input: “2x + 3 = 11”
      • Output: “x = 4 (subtract 3 → 8; divide by 2 → 4)”

Format - “Exact structure (code-only / JSON keys).”

Tell the AI exactly how to present the answer so you can copy-paste it.

  • Why it helps: Saves time. No extra fluff.
  • Common formats:
      • Code-only: “Return only Python code in one block.”
      • JSON: “Return { "status": "...", "steps": ["..."] } and nothing else.”
      • Table/Markdown: “Use a 3-column table: Step | Why | Result.”
  • Example: “Output as JSON with keys error_cause, fix_steps, final_code.”

Constraints / Tone - “Edge cases, banned libs, audience & tone.”

Rules the answer must follow (constraints) and how it should sound (tone).

  • Edge cases: Unusual situations that often break things (empty list, huge number).
  • Banned libs: Libraries/tools you don’t want used (e.g., “don’t use itertools”).
  • Audience & tone: Who will read it and how it should feel (friendly, formal, simple).
  • Example: “Handle empty input. Don’t use third-party libraries. Explain like I’m new to Python.”

Guardrails - “Hard boundaries and auto-checks.”

Final safety checks the answer must pass. If it doesn’t, the AI should fix itself before replying.

  • Why it helps: Prevents risky or wrong outputs.
  • Examples of guardrails:
      • Schema check: “If JSON keys are missing, correct them before responding.”
      • Policy check: “Don’t include secrets or API keys.”
      • Test check: “Run the sample input; if the code errors, revise and try again.”
      • Length check: “Keep under 120 words.”
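Guardrails like these can also run on your side, after the model replies. Here is a minimal sketch of a post-reply checker; the function name, the required keys, and the crude secret scan are illustrative, not from any vendor SDK:

```python
import json

REQUIRED_KEYS = {"error_cause", "fix_steps", "final_code"}

def passes_guardrails(reply: str, max_words: int = 120) -> list:
    """Return a list of guardrail violations; an empty list means the reply passes."""
    # Schema check: the reply must be valid JSON with the required keys.
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return ["reply is not valid JSON"]
    violations = []
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        violations.append(f"missing keys: {sorted(missing)}")
    # Policy check: a crude scan for leaked credentials.
    if "api_key" in reply.lower():
        violations.append("possible secret in reply")
    # Length check: keep the whole reply under the word budget.
    if len(reply.split()) > max_words:
        violations.append("over the word budget")
    return violations
```

If the list comes back non-empty, you re-prompt with the violations appended, which is exactly the "fix itself before replying" behavior the prompt asks for, enforced from outside.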

A filled example (putting it all together)

Role: You are a senior Python developer and a patient tutor.
Context: I have a list of numbers. Sometimes it’s empty. My current function crashes when the list is empty.
Task: Fix the function and explain the change in 2 sentences.
Examples (optional):
Input: [] → Output: 0
Input: [1, 2, 3] → Output: 6
Format: Return JSON only:

{ "error_cause": "...", "fix_steps": ["...", "..."], "final_code": "PYTHON_CODE" }

Constraints / Tone: Handle empty lists; no external libraries; explain simply for a beginner.
Guardrails: If the JSON keys are missing or the code fails on [] or [1,2,3], fix and re-answer before returning.

This structure keeps the AI focused, clear, and safe, while staying quick to write.
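For concreteness, here is one plausible shape of the bug and its fix, the kind of `final_code` the filled example should produce (an illustration, not an actual model reply):

```python
def total_buggy(numbers):
    result = numbers[0]        # IndexError: crashes on an empty list
    for n in numbers[1:]:
        result += n
    return result

def total(numbers):
    result = 0                 # the fix: start the accumulator at 0, so [] returns 0
    for n in numbers:
        result += n
    return result
```

Note that the fixed version satisfies both few-shot examples from the prompt: `total([])` is `0` and `total([1, 2, 3])` is `6`, which is what the guardrail's test check would verify.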


The five models, from easiest to biggest PiTA

1) RTF - Role · Task · Format

When to use: Speed runs and simple asks. You name the persona, the one thing to do, and exactly how to output it. It’s the smallest checklist that still hits the vendor-proven basics: specificity, context baked into role, and explicit output shape. If quality wobbles, add an example or a constraint line.

Why it works: Vendor docs consistently show that specifying what you want and how to format it is half the game; you can always layer more later.
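An RTF prompt is small enough to render from one helper; the persona, task, and format strings below are placeholders:

```python
def rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Render the three-line RTF skeleton: Role, Task, Format."""
    return f"You are {role}.\nTask: {task}\nFormat: {fmt}"

prompt = rtf_prompt(
    "a senior Python developer",
    "Fix the off-by-one bug in the attached function.",
    "Return only Python code in one block.",
)
```

Three lines is the whole ceremony, which is why RTF is the right default for throwaway asks.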


2) CO-STAR - Context · Objective · Style · Tone · Audience · Response

When to use: Anything stakeholder-facing (docs, emails, briefs) where audience and voice matter. It’s just RTF with knobs for audience, style, and tone. Google’s and OpenAI’s guides emphasize audience-appropriate instruction and clear deliverables; that’s CO-STAR’s sweet spot.

Why it works: It forces you to declare who this is for and how it should read, which reduces rewrites.


3) CRISPE / CRISP - (variants) Context · Request · [Style/Specifics] · Parameters · Examples

When to use: When you need more control or a repeatable pattern. The built-in Examples make few-shot conditioning explicit (hugely helpful for custom schemas), and Parameters capture constraints like length or keys. Microsoft and Anthropic both recommend few-shot examples for reliability.

Why it works: Examples + parameters = fewer “surprises” and more consistent structure.
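The Examples section is just input/output pairs rendered consistently. A small sketch, with a hypothetical helper name and sample pairs:

```python
def few_shot_block(pairs):
    """Format input/output pairs into the Examples section of a CRISPE-style prompt."""
    lines = ["Examples:"]
    for given, expected in pairs:
        lines.append(f"Input: {given}")
        lines.append(f"Output: {expected}")
    return "\n".join(lines)

block = few_shot_block([
    ("2x + 3 = 11", "x = 4"),
    ("3x - 1 = 8", "x = 3"),
])
```

Keeping the pair format identical across runs is what makes the few-shot conditioning repeatable.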


4) RASCEF - Role · Action · Steps · Context · Examples · Format

When to use: Multi-step or procedural tasks where you want a deterministic workflow (e.g. “do A → validate B → emit JSON”). This is RTF + process + examples. Google explicitly advises iterating and, when needed, changing content order; RASCEF bakes that structure in.

Why it works: Enumerated Steps reduce reasoning ambiguity; Examples lock the output shape.
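The kind of pipeline a RASCEF Steps section describes looks like this when mirrored in code; the step names and JSON shape here are illustrative:

```python
import json

def run_workflow(raw: str) -> str:
    """A 'do A -> validate B -> emit JSON' pipeline, one function per prompt Step."""
    # Step 1 (do A): normalize the input.
    cleaned = raw.strip().lower()
    # Step 2 (validate B): fail fast instead of letting bad data flow downstream.
    if not cleaned:
        raise ValueError("empty input after normalization")
    # Step 3 (emit JSON): return the agreed structure and nothing else.
    return json.dumps({"status": "ok", "steps": ["normalize", "validate"], "value": cleaned})
```

Writing the Steps section of the prompt in this same numbered, one-action-per-step style is what makes the model's execution predictable.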


5) CRAFTED - Context · Role · Action · Format · Tone · Examples · Definition of Done

When to use: Engineering-grade work where acceptance criteria and edge-case handling matter (e.g. code changes, migrations, policy reviews). “Definition of Done” is straight from Agile (explicit quality gates) applied to prompts. It’s the most setup, but it pays off when failure is expensive.

Why it works: Recency-sensitive models tend to honor constraints stated last; putting DoD at the end helps compliance. Microsoft also notes that prompt order can affect outputs, a practical nudge to put the rules where they’ll stick.
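A sketch of a CRAFTED assembler that enforces the "DoD last" ordering mechanically; the section names and criteria are examples, not a fixed schema:

```python
def crafted_prompt(sections: dict, definition_of_done: list) -> str:
    """Assemble a CRAFTED prompt, always emitting the Definition of Done last so a
    recency-sensitive model sees the acceptance criteria at the end."""
    body = "\n".join(f"{name}: {text}" for name, text in sections.items())
    dod = "\n".join(f"- {item}" for item in definition_of_done)
    return f"{body}\nDefinition of Done:\n{dod}"

prompt = crafted_prompt(
    {"Role": "You are a release reviewer.", "Action": "Review this migration plan."},
    ["Every step has a rollback.", "No third-party libraries are introduced."],
)
```

Because the DoD is concatenated after everything else, no amount of editing the other sections can push the quality gates out of last position.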


The unifying idea: knobs, not creeds

Strip away the letters and every framework asks you to tune the same six knobs:

  1. Context - what the model should know
  2. Role - who it should channel
  3. Task - the one thing to do
  4. Examples - what “good” looks like
  5. Format - the exact output shape (JSON / HTML / table)
  6. Constraints / Tone - rules, audience, and style

OpenAI, Anthropic, Google, and Microsoft all echo these same fundamentals in their suggested best practices. Each framework just bundles them a little differently. Each fits quite well into the GTM AI flywheel. Pick fewer knobs for speed (RTF), more for reliability (CRISPE / RASCEF), most for production quality (CRAFTED).
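The "knobs, not creeds" idea can be made literal: one structure with six optional fields, where setting fewer fields gives you RTF-style speed and setting all of them gives you CRAFTED-style rigor. A sketch, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    """The six knobs every framework tunes. Leave a knob empty to skip it:
    fewer knobs for speed, more for reliability."""
    context: str = ""
    role: str = ""
    task: str = ""
    examples: str = ""
    format: str = ""
    constraints: str = ""

    def render(self) -> str:
        parts = [
            ("Context", self.context),
            ("Role", self.role),
            ("Task", self.task),
            ("Examples", self.examples),
            ("Format", self.format),
            ("Constraints / Tone", self.constraints),
        ]
        return "\n".join(f"{label}: {value}" for label, value in parts if value)

quick = Prompt(role="a math tutor", task="Explain long division.").render()
```

`quick` renders only two lines; fill in all six fields and the same `render` call produces the full heavyweight prompt. The framework you "chose" is just which fields you populated.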


How to choose, based on constraints

  • Tiny task, tight time → RTF. Minimal ceremony; you can add examples later if the first pass wobbles.
  • Stakeholder-sensitive comms → CO-STAR. Audience, style, and tone prevent painful rewrites.
  • Repeatable structure → CRISPE. Examples + parameters stabilize output across runs.
  • Procedural / multi-step → RASCEF. Spell out steps to reduce reasoning drift.
  • High-risk / engineering → CRAFTED. Close with a DoD so the model hits your acceptance criteria.

Pro tip: If a prompt underperforms, don’t abandon the task; turn one more knob: add a few-shot example, strengthen the format spec, or append a short DoD. This is the iteration loop every vendor teaches.
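That loop can be sketched as code. `ask_model` and `passes` below are stand-ins for your model call and your acceptance check, and the knob strings are illustrative:

```python
def iterate_prompt(prompt: str, ask_model, passes) -> str:
    """Retry loop: after each failed attempt, turn one more knob
    instead of abandoning the task."""
    knobs = [
        "\nExamples:\nInput: [] -> Output: 0",               # add a few-shot example
        "\nFormat: Return JSON only, no prose.",             # strengthen the format spec
        "\nDefinition of Done: code runs on both samples.",  # append a short DoD
    ]
    reply = ask_model(prompt)
    for knob in knobs:
        if passes(reply):
            return reply
        prompt += knob                                       # turn exactly one more knob
        reply = ask_model(prompt)
    return reply
```

The knob order matters: cheapest fixes first, the DoD (the heaviest) last, so most prompts converge after one or two turns.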


A 30-second template you can adapt to any model

  • Role: You are a [persona]
  • Context: [Files, errors, user story]
  • Task: [One verb, one objective]
  • Examples (optional): Input → Output pairs
  • Format: Exact structure (code-only / JSON keys)
  • Constraints / Tone: Edge cases, banned libs, audience & tone
  • Guardrails: Hard boundaries and auto-checks

This is the same skeleton behind every acronym, and the one the official guides keep pointing you back to.
