
From Words to Wonders: the hidden craft of designing prompts that make AI think smarter


Prompt Engineering — Recall Notes

  1. Prompt = The input or instruction you give a generative model to guide its output.
  2. A good prompt typically includes: instruction, context, input data, and output indicators.
  3. Prompt engineering = Designing effective prompts to get optimal, logical, and relevant responses.
  4. Refinement = Iterating and experimenting with wording, structure, and detail.
  5. Why it matters: Improves efficiency, performance, reliability, and safety of model outputs.
  6. Effective prompts help control style, tone, and content.
  7. Best practices = Clarity, Context, Precision, and Role-play.
  8. Tools help with suggestions, iterative refinement, bias reduction, domain-specific support, and prompt libraries.
  9. Examples of tools: IBM Watsonx Prompt Lab, Spellbook, Dust, PromptPerfect.

Prompting Techniques & Concepts

Ways to Improve Prompt Reliability & Quality

  • Task specification → clear instructions.
  • Contextual guidance → add background info.
  • Domain expertise → integrate subject knowledge.
  • Bias mitigation → reduce unfair outputs.
  • Framing → phrase prompts effectively.
  • User feedback loop → refine iteratively.
  • Zero-shot prompting → Model responds meaningfully without examples.
  • Few-shot prompting → Provide demonstrations/examples in prompt → better in-context learning.
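The zero-shot/few-shot distinction above comes down to whether the prompt carries worked examples. A minimal sketch of both as plain string templates (the model call itself is omitted; any LLM client could consume these strings):

```python
# Zero-shot: instruction only. Few-shot: same instruction plus demonstrations,
# so the model can pick up the pattern from in-context examples.

def zero_shot(task: str, text: str) -> str:
    """Build a zero-shot prompt: the instruction and the input, no examples."""
    return f"{task}\n\nInput: {text}\nOutput:"

def few_shot(task: str, examples: list[tuple[str, str]], text: str) -> str:
    """Build a few-shot prompt: instruction, worked examples, then the input."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{task}\n\n{shots}\n\nInput: {text}\nOutput:"

prompt = few_shot(
    "Classify the sentiment as positive or negative.",
    [("I loved it", "positive"), ("Terrible service", "negative")],
    "The food was great",
)
print(prompt)
```

The trailing `Output:` cue nudges the model to complete the pattern rather than restate the instruction.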

Benefits of effective prompting

  • Improves explainability.
  • Addresses ethical concerns.
  • Builds user trust.

Reasoning-oriented techniques

  • Interview pattern → More dynamic & iterative than conventional prompting.
  • Chain-of-Thought (CoT) → Step-by-step reasoning for clarity & stronger cognitive output.
  • Tree-of-Thought (ToT) → Builds on CoT; branches reasoning like a tree → structured, diverse exploration.
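The simplest way to trigger chain-of-thought behaviour is the zero-shot CoT cue: append a phrase such as "Let's think step by step" to the question. A minimal sketch:

```python
# Sketch: turning a plain question into a chain-of-thought prompt by
# appending the widely used zero-shot CoT cue.

def with_cot(question: str) -> str:
    """Append the step-by-step cue so the model reasons before answering."""
    return question + "\nLet's think step by step."

plain = "A shop sells pens at 3 for $2. How much do 12 pens cost?"
cot_prompt = with_cot(plain)
print(cot_prompt)
```

For few-shot CoT, the demonstrations themselves would include worked reasoning, not just final answers.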

Prompt Engineering & Generative AI — Key Terms

  • Prompt → Instruction or question given to AI to generate content.
  • Prompt Engineering → Designing prompts to get better, more accurate outputs.

Task Techniques:

  • Zero-shot prompting → Model responds without examples.
  • Few-shot prompting → Demonstrations/examples in prompt to guide output.
  • Naive prompting → Simplest direct queries.
  • Framing → Guide output within specific boundaries.
  • Contextual guidance → Add background/context to improve relevance.
  • Domain expertise → Use field-specific terminology (medicine, law, etc.).
  • User feedback loop → Iteratively refine prompts with user input.

Advanced Methods:

  • Chain-of-Thought (CoT) → Step-by-step reasoning.
  • Tree-of-Thought (ToT) → Hierarchical reasoning, branching prompts.
  • Interview pattern → Simulate a conversation or Q&A style.
  • Role-play / Persona → Prompt from a character or persona perspective.
  • Self-reflection prompting → Model critiques its own outputs.
  • Playoff method → Compare multiple outputs to select the best.
  • Comparison prompting → Evaluate outputs side-by-side.
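The playoff and comparison patterns share one skeleton: sample several candidates for the same prompt, score each, keep the winner. A sketch with stand-in `generate` and `score` functions (in practice these would wrap a model call and an evaluator, whether human judgment or an automatic metric):

```python
# Playoff pattern: generate N candidates, score them, return the best.
# `generate` and `score` are illustrative stand-ins, not a real model API.

def generate(prompt: str, seed: int) -> str:
    # Stand-in for a sampled model call; returns a dummy candidate.
    return f"candidate-{seed} for: {prompt}"

def score(candidate: str) -> float:
    # Stand-in for an evaluator; here, trivially prefer longer answers.
    return float(len(candidate))

def playoff(prompt: str, n: int = 4) -> str:
    """Run the playoff: sample n candidates and keep the highest-scoring one."""
    candidates = [generate(prompt, seed) for seed in range(n)]
    return max(candidates, key=score)

best = playoff("Summarise the report in one sentence.")
print(best)
```

The cons listed below (time cost, subjective judgment) live entirely in the `score` step.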

AI & Models:

  • Generative AI → AI that creates new content (text, images, audio, video).
  • Generative AI models / LLMs → Understand input context to generate outputs.
  • Large Language Models (LLMs) → Deep learning models trained on massive text.
  • Generative pre-trained transformers (GPT) → Transformer-based AI producing human-like text.
  • Multi-modal models → Process & generate multiple data types (text, image, audio).
  • Cross-modal understanding → AI connecting reasoning across data types.
  • Text-to-Image Models → DALL-E, Midjourney, Stable Diffusion generate images from text.

Tools & Platforms:

  • OpenAI Playground, LangChain, Prompt Lab, Dust, PromptPerfect, PromptBase, IBM watsonx.ai → Experiment, chain, or optimise prompts.

Supporting Concepts:

  • API Integration → Connect software systems via APIs.
  • Input Data / Output Indicator → Input data: the information the prompt operates on; output indicator: the cue that signals the desired format or type of response.
  • Explainability → Degree to which AI decisions are understandable.
  • Bias Mitigation → Phrasing prompts to steer the model toward neutral, fair outputs.
  • Companies / Tech Reference → Claude, Scale AI, StableLM (AI assistants, data labelling, open-source models).
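As a concrete illustration of API integration, a chat-completion request is typically a JSON payload like the one below. The field names follow a common OpenAI-style shape but are illustrative; check your provider's API reference before use:

```python
# Sketch of a chat-completion request body. Only the payload is built here;
# sending it (HTTP POST with an auth header) is provider-specific.
import json

payload = {
    "model": "example-model",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain few-shot prompting in one line."},
    ],
    "temperature": 0.2,  # lower = more deterministic output
}
body = json.dumps(payload)
print(body)
```

The `messages` list is where the prompt-engineering work from the sections above actually lands: the system message sets the persona, the user message carries the task.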

Prompting Methods — Usage, Pros & Cons

1. Playoff Method

  • Usage: Compare multiple prompts/responses, pick the best.
  • Pros: Structured, systematic, helps select the optimal output.
  • Cons: Time-consuming; relies on subjective human judgment.

2. Interview Method

  • Usage: Ask clarifying questions to refine the response.
  • Pros: Adds context, improves accuracy.
  • Cons: Slower, less efficient for quick tasks.

3. Chain of Thought (CoT) Method

  • Usage: Step-by-step logical reasoning.
  • Pros: Improves clarity, transparency, and logical flow.
  • Cons: Redundant for simple tasks, slower.

4. Tree of Thought (ToT) Method

  • Usage: Explore multiple solution pathways from one idea.
  • Pros: Great for brainstorming, diverse solutions.
  • Cons: Risk of overload, harder to manage complexity.

Prompt Hacks — Quick Recall Guide

Definition

  • Techniques to manipulate prompts to guide LLMs / image models for desired outputs.
  • Aim: improve quality, enable new tasks, and make AI more user-friendly.

Benefits

  • Better quality & accuracy — fewer errors with tailored prompts.
  • Unlocks new tasks — combine with code/images.
  • Accessibility — easier, more effective AI use.

Prompt Hacks for Text Generation

  • Use modifiers — control style/tone (e.g., humorous, formal, rap style).
  • Add context & examples — detailed inputs improve relevance.
  • Combine inputs — prompt + code/image = richer output.

Example

  • Prompt: “Poem about a crow” → simple poem.
  • Prompt + modifier: “Poem about a crow in gangsta rap style” → creative, fun output.
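The modifier hack above is just string composition; a minimal sketch:

```python
# Sketch: appending a style modifier to a base prompt.

def with_modifier(base: str, modifier: str) -> str:
    """Attach a style/tone modifier to a base prompt."""
    return f"{base}, {modifier}"

plain = "Write a poem about a crow"
styled = with_modifier(plain, "in gangsta rap style")
print(styled)
```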

Prompt Hacks for Image Generation

Use an LLM as a guide for image models (DALL·E, Imagen).

Workflow:

  • Text description → LLM expands into richer prompt → image generator.
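The two-stage workflow can be sketched with stand-in functions: an LLM first enriches the terse description, then the enriched prompt goes to an image model. Both calls here are placeholders; in practice they would wrap real clients (an LLM API and an image generator such as DALL·E):

```python
# Sketch of the text -> LLM expansion -> image generator pipeline.
# Both functions are illustrative stand-ins, not real model APIs.

def expand_with_llm(description: str) -> str:
    # Stand-in: a real call would ask the LLM to add scene, lighting, style.
    return f"{description}, detailed, warm afternoon light, 35mm photo style"

def generate_image(prompt: str) -> bytes:
    # Stand-in for an image-model call; returns placeholder bytes.
    return prompt.encode("utf-8")

rich_prompt = expand_with_llm("Cat on couch")
image = generate_image(rich_prompt)
print(rich_prompt)
```

The value of the hack is entirely in the first stage: the LLM turns a sparse description into the kind of detailed prompt image models respond to best.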

Example

  • Base: “Cat on couch” → LLM refines details → precise image.
  • Creative: Ask LLM to design a prompt for “Twinkle Twinkle Little Star” → then optimise for DALL·E.

Prompt Hacking vs Prompt Engineering

Purpose

  • Prompt Hacking → Manipulate outputs, often unexpected/creative.
  • Prompt Engineering → Improve performance on specific tasks.

Approach

  • Prompt Hacking → Experimental, playful.
  • Prompt Engineering → Systematic, structured.

Application

  • Prompt Hacking → Humour, creativity.
  • Prompt Engineering → Translation, question answering, coding tasks.

Tips for Powerful Prompt Hacking

  • Be creative — try unusual instructions.
  • Be specific & clear — reduce ambiguity.
  • Learn model capabilities/limits.
  • Experiment often — iteration leads to the best results.

Conclusion

  • Prompt hacking = a creative, experimental way to maximise LLMs.
  • Works for text + images.
  • Complements prompt engineering.
  • Key = practice + experimentation.
