Generative AI Fundamentals: Prompt Engineering

Anji… · 7 min read · Feb 1, 2025

AI applications are revolutionizing various industries, transforming how tasks are performed, decisions are made, and problems are solved. It has become essential for every engineer to enhance their AI skills and contribute to their organization’s success.

In this article, I would like to walk you through prompt engineering concepts.

Before delving into the topic, let us understand what prompt engineering is and why it plays a crucial role in implementing AI applications.

What is Prompt Engineering?

Prompt engineering is the process of crafting or writing instructions to a Large Language Model (LLM) to produce desired outputs. The goal is to create high-quality and relevant outputs for the user queries.

The quality of the LLM output largely depends on how much information you provide in your prompt and how well the prompt is crafted.

LLM Parameters that Impact the Output

When working with Large Language Models (LLMs), several settings (parameters) significantly impact the output. Here are the key ones; a short code sketch after the list shows how they map onto a typical API call:

  • Temperature: Controls the randomness of responses. The range varies from 0 to 2. When the value is set to 0 or 0.1, the response is more deterministic and predictable; when it is set to a high value like 1.5 or above, the response is more creative and diverse.
    It is recommended to set lower temperature values for factual, structured answers and higher values for brainstorming and creative writing.
  • Top-K Sampling: Limits the selection to the k most probable tokens at each step. When k is set to 1, the model is restricted to the single most likely token. Lower values give more deterministic output; higher values give more diverse, unpredictable responses.
  • Top-P Sampling: Very similar to top-k but more adaptive. Instead of choosing from a fixed number of tokens (k), it selects the smallest set of tokens whose probabilities sum to p. The range is 0 to 1. A lower value yields a more deterministic response; a higher value yields a more creative one.
  • Max Tokens: Controls the maximum length of the response. Lower values result in concise answers, whereas larger values allow more detailed responses.
  • Frequency Penalty: Reduces repetition of frequently used words/phrases. The range varies from 0.0 to 2.0. Set a low value for a more focused response and a higher value to avoid repetition.
  • Presence Penalty: Encourages new topics instead of sticking to the same context. The range varies from 0.0 to 2.0. Higher values (e.g., 1.5) lead to more exploratory responses that introduce new topics.
  • Stop Sequences: Defines words/phrases at which the model should stop generating. Useful for controlling format and preventing unwanted text (ex: stop at “\n” to prevent excessive output).
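
To make these settings concrete, here is a minimal sketch of how they map onto a chat-completion call. It assumes the OpenAI Python SDK purely as an example; the model name and parameter values are illustrative, not recommendations. Note that the OpenAI chat API does not expose top-k sampling, though other providers (e.g., Anthropic) do.

```python
# A minimal sketch of LLM sampling parameters (OpenAI Python SDK assumed).
# The model name and all values are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",        # illustrative model name
    messages=[{"role": "user",
               "content": "Summarize the history of the internet."}],
    temperature=0.2,            # low -> deterministic, factual answers
    top_p=0.9,                  # nucleus sampling: smallest token set summing to 0.9
    max_tokens=200,             # cap the response length
    frequency_penalty=0.5,      # discourage repeating the same words
    presence_penalty=0.5,       # nudge the model toward new topics
    stop=["\n\n"],              # stop generating at a blank line
)
print(response.choices[0].message.content)
```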

Key Elements of a Prompt for LLMs

A well-structured prompt is essential for getting the best results from a Large Language Model (LLM). Below are the key elements that impact prompt effectiveness; a short sketch after the list shows one way to assemble them in code:

  • Clear Instruction: Specify what you want the model to do. Use imperative verbs (e.g., “Summarize”, “Translate”, “Generate”) for better responses.
  • Context & Background: Providing relevant details improves accuracy. Specify who, what, where, and when.
    ex: “You are an AI assistant for a medical company. Explain deep learning in medical imaging to a non-technical audience.”
  • Input Data (if Needed): If asking for transformation (e.g., summarization, rewriting), provide sample data.
    ex:
    Summarize the following text in one sentence: {Insert Text Here}
  • Output Format: Define the structure of the response (e.g., bullet points, JSON, tables).
    ex: “Provide the answer in JSON format with keys: ‘title’, ‘description’, and ‘example’.”
  • Constraints (Word Limit, Style): Control response length and style.
ex: “Explain quantum computing in 50 words using simple language.”
    Common Constraints:
    * Length: “Limit to 100 words.”
    * Complexity: “Explain for a 5-year-old.”
    * Tone: “Use a formal tone.”
  • Role Assignment (Persona): Assign a role for more accurate responses.
    ex: “You are a cybersecurity expert. Explain phishing attacks to a beginner.”
  • Task-Specific Instructions: Be explicit about step-by-step actions.
    example:
    Step 1: Identify the problem.
    Step 2: Provide three possible solutions.
    Step 3: Recommend the best approach with reasoning.
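
As a rough illustration of how these elements fit together, here is a plain-Python sketch that assembles a prompt from them. The function and field names are my own, not a standard.

```python
# Assembling a prompt from the key elements above. Plain Python, no
# dependencies; the template and field names are illustrative.
def build_prompt(role, context, task, input_data, output_format, constraints):
    parts = [
        f"You are {role}.",                 # role assignment (persona)
        context,                            # context & background
        task,                               # clear instruction
        f"Input:\n{input_data}" if input_data else "",  # input data (if needed)
        f"Output format: {output_format}",  # output format
        f"Constraints: {constraints}",      # constraints (length, tone, ...)
    ]
    return "\n".join(p for p in parts if p)

prompt = build_prompt(
    role="an AI assistant for a medical company",
    context="The audience is non-technical hospital staff.",
    task="Explain deep learning in medical imaging.",
    input_data="",
    output_format="three short bullet points",
    constraints="limit to 100 words, formal tone",
)
print(prompt)
```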

Example of a Well-Structured Prompt

Scenario: You want an LLM to generate a product description.

Prompt:
”You are a professional copywriter. Write a compelling product description for a smartwatch targeting fitness enthusiasts.

  • Tone: Engaging and persuasive.
  • Length: 100 words.
  • Format: Short paragraphs with key features in bullet points.
  • Example: ‘Stay fit with the new X-Fit Smartwatch! Track your steps, monitor heart rate, and stay connected on the go. Features include…’”

Key Prompting Techniques

Zero-Shot Prompting (Basic Prompting): In zero-shot prompting, instructions are given to the LLM without any examples or demonstrations. The zero-shot prompt directly instructs the model to perform a task without additional examples to steer it. The LLM responds to queries based on its training data.
example:
Prompt: “Explain the concept of blockchain in simple terms.”
Response: “Blockchain is a digital ledger that records transactions in a secure, decentralized manner…”
Zero-shot prompting works best for general knowledge queries, definitions, and summaries.
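
As code, a zero-shot call is just the instruction with no examples. A minimal sketch, assuming the OpenAI Python SDK; the model name is illustrative.

```python
# Zero-shot prompting: a single instruction, no demonstrations.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user",
               "content": "Explain the concept of blockchain in simple terms."}],
)
print(response.choices[0].message.content)
```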

Few Shot Prompting
Few-shot prompting is a technique that enables in-context learning: we provide demonstrations in the prompt to steer the model toward better performance. The examples serve as conditioning for subsequent inputs where we would like the model to generate a response.
Prompt:
“Rewrite the following sentences more politely:

  1. ‘Send me the report now.’ → ‘Could you please send me the report at your earliest convenience?’
  2. ‘I don’t understand this.’ → ‘Could you clarify this for me?’

Now, rewrite this sentence: ‘Give me the data!’”

Response: ‘Give me the data!’ → ‘Could you please provide the data when you have a moment?’
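
In code, the demonstrations simply live inside the prompt string. A sketch under the same assumptions as above (OpenAI Python SDK, illustrative model name):

```python
# Few-shot prompting: in-context examples condition the model's output style.
from openai import OpenAI

client = OpenAI()
few_shot_prompt = """Rewrite the following sentences more politely:

1. 'Send me the report now.' → 'Could you please send me the report at your earliest convenience?'
2. 'I don't understand this.' → 'Could you clarify this for me?'

Now, rewrite this sentence: 'Give me the data!'"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```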

Chain-of-Thought (CoT) Prompting: Chain-of-thought prompting enables complex reasoning capabilities through intermediate reasoning steps. In this technique, we ask the model to explain its reasoning step by step before providing an answer.
Prompt:
“A farmer has 3 cows, 2 chickens, and 4 horses. How many legs are there in total? Think step by step before answering.”
Response:
“Step 1: Cows have 4 legs each → 3 × 4 = 12 legs
Step 2: Chickens have 2 legs each → 2 × 2 = 4 legs
Step 3: Horses have 4 legs each → 4 × 4 = 16 legs
Step 4: Total legs = 12 + 4 + 16 = 32 legs.”

Chain-of-thought prompting is best for math problems, logical reasoning, and decision-making tasks.

You can combine it with few-shot prompting to get better results on more complex tasks that require reasoning before responding.
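
A minimal CoT sketch: the only change from a zero-shot call is appending the reasoning instruction (OpenAI Python SDK and model name assumed for illustration).

```python
# Chain-of-thought prompting: ask for intermediate reasoning before the answer.
from openai import OpenAI

client = OpenAI()
question = ("A farmer has 3 cows, 2 chickens, and 4 horses. "
            "How many legs are there in total?")
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    temperature=0,        # low temperature suits reasoning tasks
    messages=[{"role": "user",
               "content": question + " Think step by step before answering."}],
)
print(response.choices[0].message.content)
```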

Role-Based Prompting (Persona Setting): Assign the model a specific role or expertise to shape responses.
Prompt: “You are an experienced cybersecurity expert. Explain phishing attacks to a 10-year-old.”
Role-based prompting is best for industry-specific responses and targeted explanations.
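
In chat APIs, the persona usually goes into a dedicated system message rather than the user prompt. A sketch, again assuming the OpenAI Python SDK:

```python
# Role-based prompting: a system message sets the persona for the whole chat.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are an experienced cybersecurity expert."},
        {"role": "user",
         "content": "Explain phishing attacks to a 10-year-old."},
    ],
)
print(response.choices[0].message.content)
```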

Step-by-Step Instructions Prompting: In this technique, you break the request into multiple steps for structured responses.
Prompt:
Help me write an email to request a meeting. Follow these steps:

  1. Start with a polite greeting.
  2. State the purpose of the meeting.
  3. Propose two available time slots.
  4. End with a polite closing.

Step-by-step Instructions Prompting is best for generating structured and well-organized responses.
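
A small sketch of assembling such a prompt in plain Python; the step list mirrors the example above, and there is no API call here.

```python
# Step-by-step instructions: number the steps so the model follows them in order.
steps = [
    "Start with a polite greeting.",
    "State the purpose of the meeting.",
    "Propose two available time slots.",
    "End with a polite closing.",
]
prompt = ("Help me write an email to request a meeting. Follow these steps:\n"
          + "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1)))
print(prompt)
```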

Format-Specific Prompting: Specify a response format (JSON, tables, lists, markdown).
Prompt:
“Provide the details in JSON format with keys: ‘title’, ‘description’, and ‘example’.”

Format-specific prompting is best for API responses and structured data generation.
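
Some APIs can also enforce the format: the OpenAI chat API, for example, offers a JSON mode via the response_format parameter (the prompt must still mention JSON). A sketch under that assumption; the model name is illustrative.

```python
# Format-specific prompting: request JSON and parse the result.
import json
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",                      # illustrative model name
    response_format={"type": "json_object"},  # ask the API for valid JSON
    messages=[{"role": "user",
               "content": "Provide details about phishing in JSON format "
                          "with keys: 'title', 'description', and 'example'."}],
)
data = json.loads(response.choices[0].message.content)
print(data["title"])
```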

Delimiting Prompting (Using Boundaries): Use delimiters (triple quotes, XML, markdown) to separate input data clearly.
Prompt:
“Summarize the following article:
"""
[Insert long text here]
"""”
Delimiting Prompting is best for avoiding confusion in multi-part prompts.
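
A plain-Python sketch of the same idea: the placeholder text stays as-is, and triple quotes keep the instruction and the data visually separate.

```python
# Delimiting: wrap the input so instructions and data cannot be confused.
article = "[Insert long text here]"  # placeholder for the real article text
prompt = f'Summarize the following article:\n"""\n{article}\n"""'
print(prompt)
```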

Negative Prompting (What NOT to Include): Tell the model what to avoid in the response.
Prompt:
“Explain machine learning without using technical jargon.”

Negative prompting is best for simplified explanations and content moderation.

Self-Consistency Prompting: This technique enables you to generate multiple responses and select the best.
Prompt:
“Give me three different ways to introduce myself in a business email.”
Self-consistency prompting is best for creative writing and generating diverse responses.
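
One way to sample several candidates in a single call is the n parameter of the OpenAI chat API; how you then pick the best (majority vote, a judge model, or a human) is up to you. A sketch under those assumptions:

```python
# Self-consistency: generate several candidates, then select among them.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    n=3,                  # request three independent completions
    temperature=1.0,      # higher temperature -> more diverse candidates
    messages=[{"role": "user",
               "content": "Introduce me in one sentence for a business email."}],
)
for i, choice in enumerate(response.choices, 1):
    print(f"Candidate {i}: {choice.message.content}\n")
# Selection (e.g., majority vote or a judge prompt) is left to the caller.
```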

Interactive Prompting (Refinement): Provide feedback on generated responses to refine results.
Prompt 1:
“Write a product description for a smartwatch.”
Response 1:
“The X-Fit Smartwatch helps track fitness and stay connected…”
Prompt 2:
“Make it more engaging and highlight battery life.”
Interactive Prompting is best for the iterative improvement of responses.
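
In code, refinement means appending the model's draft and your feedback to the message history before calling again. A sketch, assuming the OpenAI Python SDK:

```python
# Interactive prompting: feed the draft back with refinement instructions.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user",
             "content": "Write a product description for a smartwatch."}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
draft = first.choices[0].message.content

messages += [
    {"role": "assistant", "content": draft},  # the model's first attempt
    {"role": "user",
     "content": "Make it more engaging and highlight battery life."},
]
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```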

Meta Prompting: This is the technique of guiding an AI model on how to generate better prompts rather than just responding to a direct query. It involves prompting the AI to create, refine, or critique prompts for optimal results.

Meta prompting improves the clarity, specificity, and structure of prompts, making them more effective.
Prompt:
“How would you rewrite this prompt to make it clearer:
‘Explain AI in simple terms.’?”
Response:
“Rewrite as: ‘Provide a beginner-friendly explanation of AI in 3 bullet points with examples.’”

Meta prompting is best when you want to optimize prompts for better AI responses and more effective prompt engineering.

Prompt Chaining: In this technique, multiple prompts are used sequentially, with each response feeding into the next step. It helps with complex, multi-step workflows that require iteration or refinement. The LLM doesn’t complete the task in one response but instead builds upon previous outputs.
Example:

Step 1: Generate an Outline
Prompt: “Create an outline for a blog about AI in healthcare.”
Response:
  1. Introduction
  2. Benefits of AI in Healthcare
  3. Challenges and Risks
  4. Future of AI in Medicine
Step 2: Expand a Specific Section
Prompt: “Expand on ‘Benefits of AI in Healthcare’ from the outline.”
Response:
  • AI improves diagnosis accuracy.
  • AI enables predictive healthcare.
  • AI reduces administrative workload.
Step 3: Summarize
Prompt: “Summarize the article in 100 words.”
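
A sketch of the three-step chain above, where each call's output is spliced into the next prompt (OpenAI Python SDK and model name assumed for illustration):

```python
# Prompt chaining: each response feeds into the next prompt.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One self-contained completion call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: generate an outline.
outline = ask("Create an outline for a blog about AI in healthcare.")

# Step 2: expand a section, feeding the outline back in.
section = ask(f"Here is an outline:\n{outline}\n\n"
              "Expand on 'Benefits of AI in Healthcare'.")

# Step 3: summarize the expanded section.
summary = ask(f"Summarize the following article in 100 words:\n{section}")
print(summary)
```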

That’s all for today!

Thank you for taking the time to read this article. I hope you enjoyed it. If you did and would like to stay updated on various technology topics, please consider subscribing for more insightful content.

