
Prompt Engineering


Colorful image outputs from a prompt to Midjourney.
“A Lego design in the style of Refik Anadol, representing a colorful Istanbul like pixels moving in the flow of waves.” -Created by Midjourney, Yavuz Kömeçoğlu

ChatGPT has been tested in many different scenarios and has performed remarkably well on important benchmarks. It can write convincing phishing messages and even generate malicious code, sometimes producing output that amounts to exploitation, alongside its many well-intentioned uses. From this, a new concept emerged: “Prompt Engineering.” Many are now calling it “the career of the future.”

🔮What is Prompt Engineering?

“Prompt Engineering” is the practice of guiding a language model toward a desired output with a clear, detailed, well-defined, and optimized prompt. It is now possible to see job postings for this role with annual salaries between $250k and $335k!

Prompt engineer job description

🐲 The Anatomy of a Prompt

Let’s try to understand the prompt structure through an interface offered by OpenAI. You can see the response ChatGPT provides to a fairly simple command like “ChatGPT is” in the example below. While this sentence isn’t actually a command with an action, it’s enough for the chatbot to make a description of what ChatGPT is. Here, the user provides the input “ChatGPT is,” (an ‘instruction’) and the language model produces the output (a ‘response’).

ChatGPT Prompt 1
Image by author

The above example shows the two basic elements of a prompt. The language model needs a user-supplied instruction to generate a response: when a user provides an instruction, the language model produces a response. But what other structural elements can be fed to ChatGPT?

input data with output indicator
Source: Prompt Engineering Guide
  • Instruction: The section where the task is described. The task to be performed must be stated clearly.
  • Context: A task can be understood differently depending on its context, so issuing an instruction without context can lead the language model to produce something other than what is expected.
  • Input data: Specifies which data, and what kind of data, the instruction will operate on. Presenting it to the language model in a clear, structured format improves the quality of the response.
  • Output indicator: Signals what the expected output is. The expected output can be defined structurally here, so that the model produces a response in a specific format.
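The four elements above can be assembled programmatically. The sketch below is illustrative (the helper name `build_prompt` and the example strings are not from the article); it simply concatenates whichever elements are supplied:

```python
def build_prompt(instruction, context="", input_data="", output_indicator=""):
    """Assemble a prompt from the four structural elements above.

    Only the instruction is required; empty elements are skipped.
    """
    parts = [context, instruction, input_data, output_indicator]
    return "\n\n".join(part for part in parts if part)

prompt = build_prompt(
    instruction="Classify the sentiment of the review as positive, neutral, or negative.",
    context="You are analyzing customer reviews for a restaurant.",
    input_data='Review: "The food was great but the service was slow."',
    output_indicator="Sentiment:",
)
print(prompt)
```

Ending the prompt with the output indicator (`Sentiment:`) invites the model to complete exactly the field you want.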

🤹🏻‍♀️ Types of Prompts

📌Instruction Prompting

Simple instructions provide guidance for producing useful outputs. For example, an instruction can express a clear and simple mathematical operation such as “Add the numbers from 1 to 99.”

ChatGPT prompt 3
Image by author
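The arithmetic behind this instruction has a closed form, which is handy for checking the model’s answer. A quick sanity check:

```python
# The sum 1 + 2 + ... + n has the closed form n(n + 1) / 2.
# For n = 99 this should match brute-force addition.
n = 99
closed_form = n * (n + 1) // 2
brute_force = sum(range(1, n + 1))
assert closed_form == brute_force
print(closed_form)  # 4950
```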

Or, you could try your hand at a slightly more complicated command. For example, maybe you want to analyze customer reviews for a restaurant separately according to taste, location, service, speed and price. You can easily do this with the command below:

ChatGPT Prompt 4
Image by author
ChatGPT Prompt 6
Image by author
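A review-analysis task like this can be scripted so the same prompt template is reused across many reviews. The helper below is a hypothetical sketch (the names `ASPECTS` and `review_analysis_prompt` are illustrative); note how the “Desired format” block acts as an output indicator:

```python
ASPECTS = ["taste", "location", "service", "speed", "price"]

def review_analysis_prompt(reviews):
    """Build a prompt that asks for a per-aspect rating of each review.

    The 'Desired format' block pins the response to a fixed,
    easily parseable structure.
    """
    instruction = (
        "Analyze the restaurant reviews below. For each review, rate each "
        "aspect (" + ", ".join(ASPECTS) + ") as positive, negative, or not mentioned."
    )
    body = "\n".join(f"Review {i}: {text}" for i, text in enumerate(reviews, 1))
    fmt = "Desired format:\nReview <n>: " + "; ".join(f"{a}: <rating>" for a in ASPECTS)
    return "\n\n".join([instruction, body, fmt])

prompt = review_analysis_prompt(
    ["Great kebab and friendly staff, but we waited 40 minutes for a table."]
)
print(prompt)
```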

📌Role Prompting

role prompting
Image by author

In role prompting, the model is assigned a persona before the question is asked. Let’s ask for the same advice again, but this time we’ll assign the AI the role of a dentist.

ChatGPT Prompt 9
Image by author

You can clearly see a difference in both the tone and content of the response, given the role assignment.
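With the chat-message format, a role assignment is typically carried in a system message. The sketch below only constructs the message list (the helper name `role_prompt` and the strings are illustrative); with the `openai` Python package, a list shaped like this is what you would pass as `messages=` to a chat completion call:

```python
def role_prompt(role_description, question):
    """Build a chat-format message list that assigns the model a persona.

    The system message sets the role; the user message carries the question.
    """
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": question},
    ]

messages = role_prompt(
    "You are an experienced dentist. Answer with professional, cautious dental advice.",
    "My tooth aches when I drink something cold. What should I do?",
)
```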

📌“Standard” Prompting

Prompts are considered “standard” when they consist of only one question. For example, ‘Ankara is the capital of which country?’ would qualify as a standard prompt.

standard prompting
Image by author

🧩Few shot standard prompts

Few shot standard prompts can be thought of as standard prompts preceded by a few examples. This is beneficial because it facilitates in-context learning: the examples included in the prompt guide the model toward better performance.

few shot standard prompts
Image by author
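Constructing a few-shot prompt is mostly string formatting: show a few labeled question/answer pairs, then leave the final answer blank for the model to complete. A minimal sketch (the helper name `few_shot_prompt` is illustrative):

```python
def few_shot_prompt(examples, query):
    """Prefix a standard prompt with a few input/output examples.

    The model infers the task from the examples (in-context learning)
    and completes the final, unanswered line.
    """
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {query}\nA:"

prompt = few_shot_prompt(
    [
        ("Ankara is the capital of which country?", "Turkey"),
        ("Paris is the capital of which country?", "France"),
    ],
    "Tokyo is the capital of which country?",
)
print(prompt)
```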

📌Chain of Thought (CoT) Prompting

Chain of Thought prompting is a way of simulating the reasoning process while answering a question, similar to the way the human mind might reason through it. If this reasoning process is demonstrated with examples, the AI can generally achieve more accurate results.

Comparison of models on the GSM8K benchmark

Now let’s try to see the difference through an example.

Chain of Thought Prompting Elicits Reasoning in Large Language Models(2022)
Source: Chain of Thought Prompting Elicits Reasoning in Large Language Models(2022)

Above, the language model is first shown a worked example of step-by-step reasoning, demonstrating how it should “think” through the problem before answering the new question.
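In code, a few-shot chain-of-thought prompt is just a worked example prepended to the new question. The example below is the tennis-ball problem from Wei et al. (2022); the helper name `cot_prompt` is illustrative:

```python
# Worked example from Wei et al. (2022): the demonstration answer
# spells out the intermediate reasoning, not just the final number.
COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
    "tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def cot_prompt(question):
    """Few-shot chain of thought: show step-by-step reasoning first so
    the model imitates the pattern on the new question."""
    return f"{COT_EXAMPLE}Q: {question}\nA:"

prompt = cot_prompt(
    "The cafeteria had 23 apples. If they used 20 and bought 6 more, "
    "how many apples do they have?"
)
print(prompt)
```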

🧩 “Zero Shot Chain of Thought (Zero-Shot CoT)”

“Zero Shot Chain of Thought (Zero-Shot CoT)” differs slightly from this approach. Here, the model’s reasoning ability can be improved simply by adding a directive such as “Let’s think step by step,” without presenting any worked example to the language model.


Zero Shot Chain of Thought
Source: Zero Shot Chain of Thought

Experiments show that the Zero-Shot CoT approach alone is not as effective as few-shot Chain of Thought prompting. The choice of directive matters a great deal, however, and “Let’s think step by step” has been observed to produce better results than many alternative phrasings.
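The whole technique amounts to appending the trigger phrase to the question. A minimal sketch (the helper name `zero_shot_cot` is illustrative; the sample question is from the Zero-Shot CoT paper):

```python
ZERO_SHOT_COT_TRIGGER = "Let's think step by step."

def zero_shot_cot(question):
    """Zero-Shot CoT: no worked examples, just a directive appended
    after the question that nudges the model into step-by-step reasoning."""
    return f"Q: {question}\nA: {ZERO_SHOT_COT_TRIGGER}"

prompt = zero_shot_cot(
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)
print(prompt)
```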

Other notable prompting approaches include:
  • Generated Knowledge Prompting, in which the model first generates relevant knowledge that is then provided as additional context.
  • Program-aided Language Models (PAL), which use a language model to read problems and write programs as intermediate reasoning steps.
  • Self-Consistency, which samples multiple reasoning chains and can be used to improve the performance of chain-of-thought prompting across different tasks.

🪐OpenAI API Overview

🤑 For your initial trials, each new user is offered $18 of free credit, valid for the first 3 months. The system works on a “pay as you go” basis, with pricing specific to the model you use for your task. For more detailed pricing information, see OpenAI’s pricing page.

Among the available models, the GPT-3 family that can understand and generate natural language comprises Davinci, Curie, Babbage, and Ada. There are also the Codex models, descendants of GPT-3 trained on both natural language data and billions of lines of code from GitHub. For more detail about the models, you can review the official documentation.

  • Language models with different settings can produce very different outputs for the same prompt.
  • Controlling how deterministic the model is matters greatly when creating completions for prompts.
  • There are two basic parameters to consider here: temperature and top_p (keep both low for more precise, deterministic answers).
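The effect of temperature can be illustrated without calling the API. The hypothetical sketch below rescales token logits before sampling, the mechanism temperature controls, and shows why very low values make the choice (nearly) deterministic:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random.Random(0)):
    """Divide logits by the temperature, apply softmax, and sample.

    temperature -> 0 approaches greedy decoding (always the top token);
    temperature > 1 flattens the distribution (more varied output).
    """
    scaled = [l / max(temperature, 1e-6) for l in logits]
    top = max(scaled)
    weights = [math.exp(s - top) for s in scaled]  # subtract max for stability
    total = sum(weights)
    probs = [w / total for w in weights]
    # Inverse-CDF sampling
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# At a very low temperature, 100 draws all pick the top-logit token (index 0).
picks = {sample_with_temperature([2.0, 1.0, 0.1], temperature=0.05) for _ in range(100)}
print(picks)  # {0}
```

top_p (nucleus sampling) achieves a similar effect by restricting sampling to the smallest set of tokens whose probabilities sum to p.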

🍀 Recommendations and Tips for Prompt Engineering with OpenAI API

Best practices with OpenAI
Source: Best practices for OpenAI

If you have a preferred output format in mind, we recommend providing a format example, as shown below:

Less effective ❌:

Extract the entities mentioned in the text below. 
Extract the following 4 entity types: company names, people names, 
specific topics and themes.

Text: {text}

Better ✅:

Extract the important entities mentioned in the text below. 
First extract all company names, then extract all people names, 
then extract specific topics which fit the content and finally 
extract general overarching themes

Desired format:
Company names: <comma_separated_list_of_company_names>
People names: -||-
Specific topics: -||-
General themes: -||-
Text: {text}
🔸Reduce “fluffy” and imprecise descriptions:

Less effective ❌:

The description for this product should be fairly short, a few sentences only, 
and not too much more.

Better ✅:

Use a 3 to 5 sentence paragraph to describe this product.
🔸Instead of just saying what not to do, say what to do instead:

Less effective ❌:

The following is a conversation between an Agent and a Customer. DO NOT ASK USERNAME OR PASSWORD. DO NOT REPEAT.

Customer: I can't log in to my account.

Better ✅:

The following is a conversation between an Agent and a Customer. The agent will attempt to diagnose the problem and suggest a solution, while refraining from asking any questions related to PII. Instead of asking for PII such as a username or password, refer the user to the relevant help article.

Customer: I can’t log in to my account.

🔸Code Generation Specific — Use “leading words” to nudge the model toward a particular pattern: 

It may be necessary to give the language model hints when asking it to generate a piece of code. For example, you can provide a starting point, such as “import” when it should write Python code, or “SELECT” when it should write an SQL query.
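In practice, this just means ending the prompt with the leading word so the model’s completion continues from it. A small sketch (the helper name `code_gen_prompt` is illustrative):

```python
def code_gen_prompt(task, leading_word):
    """State the task, then end the prompt with a leading word that
    nudges the model into the desired language or pattern."""
    return f"{task}\n\n{leading_word}"

python_prompt = code_gen_prompt(
    "Write a function that checks whether a number is prime.", "import"
)
sql_prompt = code_gen_prompt(
    "List the customers who placed an order in the last 30 days.", "SELECT"
)
```

Because the completion picks up right after “import” or “SELECT”, the model is far more likely to answer in Python or SQL respectively.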

To dig deeper into prompt engineering, check out Comet’s newest product, LLMOps. It’s designed to allow users to leverage the latest advancement in Prompt Management and query models in Comet to iterate quicker, identify performance bottlenecks, and visualize the internal state of the Prompt Chains.

Başak Buluz Kömeçoğlu

Research Assistant at Information Technologies Institute of Gebze Technical University | Phd Candidate at Gebze Technical University.