Prompt Engineering
Despite their recent ascendance and popularity, artificial intelligence (AI) language models lack common-sense knowledge. As a result, they struggle with scenarios that require contextual understanding or common-sense reasoning. These limitations arise from the models’ dependence on their training data. One technique used to tackle this limitation is prompt engineering.
Prompts serve as a means of communicating the user’s intent and specifying context for an AI model.
A prompt is simply natural-language text describing the task an AI model is to perform. In the world of AI, particularly in the realm of natural language processing (NLP), prompts play a pivotal role in shaping how AI models like ChatGPT generate responses to user inputs. The process of crafting these prompts is known as prompt engineering. Prompt engineering enables language models to generate more accurate and contextually relevant responses, thereby enhancing their problem-solving capabilities.
Prompt engineering leverages the marvel of in-context learning (ICL). In this paradigm, an AI model (a large language model, or LLM) learns to solve a new task at inference time without having to be re-trained. In-context learning is an efficient alternative to fine-tuning a pre-trained language model on a task-specific dataset. Unlike fine-tuning, in-context learning does not entail updating the model’s parameters. Instead, it relies on prompts to prepare the model for context-specific inference, lessening dependence on narrow, task-specific fine-tuning datasets.
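To make this concrete, here is a minimal sketch of few-shot in-context learning: the task is taught entirely through labeled examples placed in the prompt, and the model is asked to continue the pattern at inference time with no weight updates. The `complete` helper below is a hypothetical stand-in for whatever LLM completion API you use; it is not part of any particular library.

```python
# Minimal sketch of few-shot in-context learning.
# `complete` is a hypothetical stand-in for an LLM completion call;
# swap in your provider's client.

def complete(prompt: str) -> str:
    """Hypothetical LLM call: send `prompt`, return the model's text."""
    raise NotImplementedError("Wire this up to your LLM provider.")

def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Build a prompt that teaches the task through in-prompt examples,
    not through any change to the model's weights."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes this line
    return "\n".join(lines)

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "Positive"),
    ("It stopped charging after a week.", "Negative"),
]
prompt = few_shot_prompt(examples, "Setup was painless and it just works.")
# print(complete(prompt))  # expected completion: "Positive"
```

Note that nothing about the model changes between queries; the “learning” lives entirely in the prompt, which is what makes this cheaper and faster to iterate on than fine-tuning.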
How it works
We already engage in a basic form of prompt engineering when interacting with voice assistants like Alexa, Siri, or Google Assistant. The way you phrase a question or command has a significant impact on the response you receive. Asking Siri “Can you recommend a good restaurant?” will produce a different list than asking “Can you recommend a good Chinese restaurant?”
Similarly, a well-crafted prompt will produce a more contextually relevant response. For example, suppose we ask ChatGPT the following: “Write a marketing email for a new line of luxury watches.” This prompt is very broad and will yield a generic response. To improve the quality and specificity of the email ChatGPT generates, prompt engineering can be applied.
Engineered Prompt: “Compose a persuasive marketing email introducing our new collection of luxury watches to affluent professionals. Highlight the craftsmanship, timeless design, and limited availability to create a sense of exclusivity.”
The crafted prompt above incorporates three key factors: specificity, guidance, and context.
Specificity: The prompt clearly outlines the target audience (“affluent professionals”) and the key selling points (“craftsmanship,” “timeless design,” and “limited availability”).
Guidance: The prompt instructs the AI model to focus on creating a persuasive email, guiding it towards a more effective response.
Context: It sets the context for the email, emphasizing the luxury and exclusivity of the product.
With this well-crafted prompt, ChatGPT is more likely to generate a marketing email that aligns with the marketing objectives, speaks directly to the intended audience, and effectively showcases the product’s unique selling points. This demonstrates how prompt engineering can significantly influence the quality and relevance of AI-generated content.
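As a sketch of how this plays out in code, the snippet below sends both the broad prompt and the engineered prompt to the same model so the outputs can be compared side by side. It assumes the OpenAI Python SDK’s v1 chat interface purely for illustration; any chat-capable LLM client would work the same way, and the model name is a placeholder, not a recommendation.

```python
# Sketch: compare a broad prompt with an engineered one.
# Assumes the OpenAI Python SDK (v1 chat interface) for illustration only.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

broad_prompt = "Write a marketing email for a new line of luxury watches."

engineered_prompt = (
    "Compose a persuasive marketing email introducing our new collection "
    "of luxury watches to affluent professionals. Highlight the "
    "craftsmanship, timeless design, and limited availability to create "
    "a sense of exclusivity."
)

def generate(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use any available chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print("--- Broad prompt ---")
print(generate(broad_prompt))
print("\n--- Engineered prompt ---")
print(generate(engineered_prompt))
```

Running both prompts against the same model makes the effect of specificity, guidance, and context directly observable: the engineered version should consistently name the audience and selling points, while the broad version tends toward boilerplate.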
So where do I think the puck will be?
Don’t chase the hockey puck; skate to where it is going to be.
The unfortunate truth is that responses from AI models, no matter how large, aren’t always correct. By improving contextual understanding and incorporating domain-specific knowledge, prompt engineering enhances the accuracy of responses from AI models. However, the efficacy of a prompt is contingent on the specific AI model; its utility across diverse models is limited. How one prompts ChatGPT is different from how one prompts Stable Diffusion. The World Economic Forum lists prompt engineering as the number one emerging job of the future. However, the pace at which AI is evolving is bound to produce models that are more adept at understanding natural language and context, reducing the need for carefully crafted prompts.
Even the most sophisticated prompts will fall short. Prompt engineering is focused on crafting optimal input to mitigate context-related deficiencies in today’s AI models. I’d rather focus on techniques aimed at building better LLMs.