Timeless Prompt Engineering Principles to Improve AI Output Reliability

General Assembly
September 20, 2024

In the rapidly evolving landscape of artificial intelligence, prompt engineering has emerged as a critical skill for anyone working with large language models (LLMs) like GPT-4 or image generation models like DALL-E 3 and Midjourney. But what exactly is prompt engineering, and why is it so important?

Prompt engineering is the art and science of crafting inputs that consistently yield desired outputs from AI models. It’s about learning to “speak AI”—understanding how to phrase your requests in a way that maximises the chances of getting accurate, relevant, and useful responses.

As AI continues to integrate into various aspects of our personal and professional lives, mastering prompt engineering becomes increasingly valuable. Whether you’re a developer or a prompt engineer building AI-powered applications, a content creator leveraging AI for ideation, or a business professional using AI tools to streamline workflows, these principles will help you get more reliable and higher-quality outputs.

Let’s dive into five timeless prompt engineering principles that will significantly improve your results when working with AI.

1. Give Clear Direction

When interacting with AI models, clarity is key. The more specific and detailed your instructions, the better the AI can understand and execute your request. This principle is about providing the right context, setting expectations, and guiding the LLM toward the desired output.

Example:

Instead of asking, “Write a story,” try:

```

Write a 500-word short story with the following parameters:

– Genre: Science fiction

– Setting: A colony on Mars in the year 2150

– Main character: A botanist trying to grow the first Earth tree on Mars

– Conflict: Unexpected Martian microbes are attacking the tree’s root system

– Tone: Hopeful despite the challenges

– Include: At least one scene describing the Martian landscape

```

This detailed prompt gives the LLM clear boundaries and elements to work with, resulting in a more focused and relevant output.
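If you generate prompts like this from application code, it can help to keep the parameters in one place and assemble the instruction text from them. Below is a minimal Python sketch of that idea; the helper function and parameter names are purely illustrative and not part of any particular library.

```
# A minimal sketch: build the detailed story prompt above from a dictionary of
# parameters, so the same clear-direction structure can be reused across requests.
# The function and parameter names here are illustrative, not a required schema.
story_params = {
    "Genre": "Science fiction",
    "Setting": "A colony on Mars in the year 2150",
    "Main character": "A botanist trying to grow the first Earth tree on Mars",
    "Conflict": "Unexpected Martian microbes are attacking the tree's root system",
    "Tone": "Hopeful despite the challenges",
    "Include": "At least one scene describing the Martian landscape",
}

def build_story_prompt(params: dict[str, str], word_count: int = 500) -> str:
    """Turn a dict of constraints into a clearly structured prompt string."""
    lines = [f"Write a {word_count}-word short story with the following parameters:"]
    lines += [f"- {key}: {value}" for key, value in params.items()]
    return "\n".join(lines)

print(build_story_prompt(story_params))
```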

2. Specify Format

LLMs can generate content in many formats, from prose and code to structured data. By explicitly defining the desired format, you make it far more likely that the output is immediately usable for your intended purpose. A good lecture to watch on this topic is “Pydantic Is All You Need” by Jason Liu.

Example:

If you need a list of ideas in a specific format, you might use:

```

Generate 5 ideas for eco-friendly product innovations.

Present each idea in the following JSON format:

{
  "productName": "Name of the product",
  "briefDescription": "A one-sentence description",
  "targetMarket": "Primary intended users",
  "environmentalImpact": "How it benefits the environment",
  "challengesToOvercome": ["Challenge 1", "Challenge 2", "Challenge 3"]
}

```

This format specification makes it easy to process the LLM’s output programmatically or integrate it directly into a database or software application.
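To make that programmatic processing concrete, here is a minimal Python sketch that validates one returned idea against a pydantic model, the approach the Jason Liu lecture advocates. The sample JSON string is a hand-written stand-in, not a real model response.

```
# A minimal sketch of validating the LLM's JSON output with pydantic.
# raw_output is an illustrative stand-in for a real model response.
import json
from pydantic import BaseModel

class ProductIdea(BaseModel):
    productName: str
    briefDescription: str
    targetMarket: str
    environmentalImpact: str
    challengesToOvercome: list[str]

raw_output = """
{
  "productName": "SolarBrew",
  "briefDescription": "A portable coffee maker powered entirely by sunlight.",
  "targetMarket": "Campers and off-grid workers",
  "environmentalImpact": "Replaces disposable fuel canisters and mains electricity",
  "challengesToOvercome": ["Cloudy-day performance", "Heat-up time", "Unit cost"]
}
"""

# Raises a ValidationError if the model returned the wrong shape or types.
idea = ProductIdea(**json.loads(raw_output))
print(idea.productName, "->", idea.targetMarket)
```

One common pattern when validation fails is to feed the error message back to the model and ask it to correct its own output.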

3. Provide Examples

One of the most powerful ways to guide an AI’s output is by providing examples. This technique, often called few-shot learning, helps the model understand the specific style, tone, or structure you’re looking for.

Example:

Let’s say you want to generate product descriptions in a particular style:

```

Write 3 product descriptions for fictional smartphones in the style of the examples below. Each description should highlight a unique feature of the phone.

Example Product Descriptions

  1. Nexus X: Your digital revolution. Quantum-core processor for unmatched speed. Adaptive AI anticipates your needs. Battery outlasts your longest days. Always connected, always ahead.
  2. LuminaPro: Illuminate your world. Holographic display brings content to life. Photonic sensors capture vivid memories in any light. Quantum encryption for unbreakable security. Brilliance in your hand.
  3. EcoSync: Sustainable innovation. Recycled materials, air-purifying shell. Solar-enhanced screen for constant power. Carbon-negative apps offset your digital footprint. Connect and protect the world.

[AI generates three new descriptions based on these examples]

```

By providing a vivid example, you’re much more likely to get outputs that match the desired tone and structure.
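When you call a chat model from code, one common way to supply few-shot examples is as alternating user/assistant turns rather than a single long prompt. The Python sketch below builds such a message list from the fictional phones above; the system message and the fourth phone name are made up for illustration, and the finished messages list would be passed to whichever chat API you use.

```
# A minimal sketch of encoding few-shot examples as alternating chat turns.
# The first three descriptions are the fictional phones from the prompt above;
# "TerraLink" is a made-up target product for the new request.
few_shot_pairs = [
    ("Describe a fictional smartphone called Nexus X.",
     "Nexus X: Your digital revolution. Quantum-core processor for unmatched speed. "
     "Adaptive AI anticipates your needs. Battery outlasts your longest days. "
     "Always connected, always ahead."),
    ("Describe a fictional smartphone called LuminaPro.",
     "LuminaPro: Illuminate your world. Holographic display brings content to life. "
     "Photonic sensors capture vivid memories in any light. Quantum encryption for "
     "unbreakable security. Brilliance in your hand."),
    ("Describe a fictional smartphone called EcoSync.",
     "EcoSync: Sustainable innovation. Recycled materials, air-purifying shell. "
     "Solar-enhanced screen for constant power. Carbon-negative apps offset your "
     "digital footprint. Connect and protect the world."),
]

messages = [{"role": "system",
             "content": "You write punchy, feature-led product descriptions."}]
for request, description in few_shot_pairs:
    messages.append({"role": "user", "content": request})
    messages.append({"role": "assistant", "content": description})

# The new request: the model now imitates the style established above.
messages.append({"role": "user",
                 "content": "Describe a fictional smartphone called TerraLink."})
```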

4. Implement Evaluation Criteria

To consistently improve your results, it’s crucial to have a way to evaluate an LLM’s outputs. This can range from simple rating systems to more complex rubrics, depending on your needs.

Example:

For a task like generating marketing taglines, you might include evaluation criteria in your prompt:

```

Generate 5 catchy taglines for a new line of sustainable, plant-based protein bars. After generating each tagline, rate it on a scale of 1-10 for each of the following criteria:

– Memorability: How likely is it to stick in someone’s mind?

– Relevance: How well does it communicate the product’s key benefits?

– Emotional Appeal: How effectively does it evoke positive feelings?

– Uniqueness: How different is it from common taglines in the health food industry?

Format:

1. [Tagline]

   Memorability: [1-10]

   Relevance: [1-10]

   Emotional Appeal: [1-10]

   Uniqueness: [1-10]

[Repeat for all 5 taglines]

```

This approach not only gives you multiple options but also provides a built-in assessment of each output, helping you quickly identify the strongest contenders. Ideally, a human would then review the LLM’s outputs and correct the scores where needed; these human-verified labels can then be used to automatically optimise prompts with packages such as DSPy.
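If the model follows the format above, the scored output is also easy to rank automatically. Here is a minimal Python sketch that parses the scores and sorts the taglines by total; the sample text is an illustrative stand-in for a real response.

```
# A minimal sketch of parsing the scored taglines and ranking them by total score.
# sample_output is a hand-written stand-in, not a real model response.
import re

sample_output = """\
1. Power Your Day, Protect the Planet
   Memorability: 8
   Relevance: 9
   Emotional Appeal: 7
   Uniqueness: 6
2. Plant Protein, Peak Performance
   Memorability: 7
   Relevance: 8
   Emotional Appeal: 6
   Uniqueness: 7
"""

pattern = re.compile(
    r"\d+\.\s*(?P<tagline>.+?)\n"
    r"\s*Memorability:\s*(?P<mem>\d+)\n"
    r"\s*Relevance:\s*(?P<rel>\d+)\n"
    r"\s*Emotional Appeal:\s*(?P<emo>\d+)\n"
    r"\s*Uniqueness:\s*(?P<uni>\d+)"
)

scored = [
    (m["tagline"].strip(),
     int(m["mem"]) + int(m["rel"]) + int(m["emo"]) + int(m["uni"]))
    for m in pattern.finditer(sample_output)
]

# Print the taglines from strongest to weakest by total score.
for tagline, total in sorted(scored, key=lambda item: item[1], reverse=True):
    print(f"{total:>2}  {tagline}")
```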

5. Divide Complex Tasks

When faced with a complex task, it’s often more effective to break it down into smaller, manageable steps. This principle applies equally well to prompt engineering.

Example:

Instead of asking the AI to create a comprehensive marketing plan in one go, you might structure your prompts like this:

```

We’re going to create a marketing plan for a new eco-friendly water bottle. We’ll do this in steps. For each step, I’ll provide a specific prompt, and you’ll respond with the requested information. Ready?

Step 1: Define the target audience

Prompt: Create 3 detailed buyer personas for our eco-friendly water bottle. Include demographics, psychographics, and key pain points for each.

[AI responds]

Step 2: Unique Selling Proposition (USP)

Prompt: Based on the personas defined, craft a compelling USP for our water bottle that addresses their needs and differentiates us from competitors.

[AI responds]

Step 3: Marketing Channels

Prompt: Suggest 5 marketing channels that would be most effective for reaching our target audience. For each channel, explain why it’s suitable and provide one specific marketing tactic we could employ.

[AI responds]

… [Continue with additional steps]

```

This step-by-step approach allows you to guide the AI through a complex process, reviewing and adjusting at each stage if necessary. It works particularly well for content creation and for refining existing content or code.
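In code, this step-by-step pattern becomes a loop that appends each answer to the conversation before asking the next question, so every step can build on the previous one. The sketch below assumes the official openai Python package and an API key in the environment; the model name is just a placeholder to swap for whichever model you use.

```
# A minimal sketch of running the marketing-plan steps as one chained conversation.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment;
# "gpt-4o" is a placeholder model name.
from openai import OpenAI

client = OpenAI()

steps = [
    "Create 3 detailed buyer personas for our eco-friendly water bottle. "
    "Include demographics, psychographics, and key pain points for each.",
    "Based on the personas defined, craft a compelling USP for our water bottle "
    "that addresses their needs and differentiates us from competitors.",
    "Suggest 5 marketing channels that would be most effective for reaching our "
    "target audience. For each channel, explain why it's suitable and provide one "
    "specific marketing tactic we could employ.",
]

messages = [{"role": "system",
             "content": "We're creating a marketing plan for a new eco-friendly "
                        "water bottle, one step at a time."}]

for i, step in enumerate(steps, start=1):
    messages.append({"role": "user", "content": f"Step {i}: {step}"})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = response.choices[0].message.content
    # Carry the answer forward so the next step builds on it.
    messages.append({"role": "assistant", "content": answer})
    print(f"--- Step {i} ---\n{answer}\n")
```

Because each answer is reviewed before the next prompt is sent, you can also edit or replace a weak step before it contaminates the rest of the plan.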

Combine Art and Science to Maximise Your Prompt Engineering Prowess

By applying these five principles—giving clear direction, specifying format, providing examples, implementing evaluation criteria, and dividing complex tasks—you’ll be well on your way to becoming a skilled prompt engineer.

Remember, prompt engineering is both an art and a science. It requires creativity, critical thinking, and a willingness to experiment. As you practise these principles, you’ll develop an intuition for what works best in different situations and with different AI models.

If you’re eager to dive deeper into AI and prompt engineering, our AI for Workplace Productivity Workshop is a great next step. And if you’re looking to integrate AI into your work with data, we’ve got you covered. Our AI for Data Analysis and Visualisations Workshop offers hands-on experience and expert guidance to take your AI and data skills to the next level. You can also explore our part-time and full-time Data Analytics Bootcamps that offer AI training seamlessly integrated into the coursework. 

For those who prefer a comprehensive resource at their fingertips, check out the O’Reilly book I co-authored with Mike Taylor: Prompt Engineering for Generative AI: Future-Proof Inputs for Reliable AI Outputs.


Whether you’re a developer, a data pro, or simply an AI enthusiast, these resources will help you unlock the full potential of AI in your work and projects. So why wait? Start your prompt engineering adventure today, and join the community of innovators shaping the future of AI interaction.

About James Phoenix:
James Phoenix has a background in building reliable data pipelines and software for marketing teams, including automation of thousands of recurring marketing tasks. He has taught 60+ Data Science Bootcamps for General Assembly.
