
Advanced Prompting Techniques: Accessing True AI Feature Richness

Jesse Reiss

CTO & Co-Founder

Introduction

The advent of Large Language Models (LLMs) like GPT-4 has transformed how businesses can leverage AI for various applications. However, the true potential of these models extends far beyond basic prompt-response interactions. By employing advanced prompting techniques and developing sophisticated prompt architectures, businesses can build AI features that are more effective, accurate, and aligned with specific needs. Of course, prompting is only one part of the equation, and our AI products are composed of a complex architecture involving agent chaining and retrieval-augmented generation (RAG). However, in this post we wanted to take a deep dive on the importance of prompts and how complex they can become.

The Evolution of Prompt Engineering

Prompt engineering has traditionally been understood as the art of crafting the right question to get a useful response from an AI model. This view, however, is overly simplistic. The true value of prompt engineering lies in its ability to create a multi-faceted, dynamic interaction with AI. By envisioning prompts as components of a broader architecture, we can unlock more sophisticated and tailored AI solutions.


Building a Multi-Stage Prompt Architecture

One of the key concepts in advanced prompt engineering is the idea of a multi-stage prompt architecture. This approach involves creating a series of interconnected prompts, each designed to refine and build upon the responses of previous prompts. Here's a breakdown of how this works:

  1. Initial Prompt
    The process begins with a carefully crafted initial prompt designed to elicit a broad but relevant response from the AI.
  2. Intermediate Prompts
    Depending on the AI’s response, a series of intermediate prompts are generated to delve deeper into specific aspects, guiding the AI through a structured exploration of the topic.
  3. Refinement Prompts
    These prompts aim to refine the responses further, isolating the most relevant information and filtering out noise.
  4. Final Output
    The cumulative result of this multi-stage process is a highly refined output that is significantly more tailored and useful than what a single prompt could achieve.

This architecture can be visualized as a network of stages, each with specific goals, context, and outputs, all contributing to a more nuanced and effective interaction with the AI.
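The four stages above can be sketched as a simple pipeline. This is a minimal illustration, not a production implementation: `call_llm` is a hypothetical stand-in for a real model API call, stubbed here so the structure is runnable on its own, and the prompt wording is invented for the example.

```python
def call_llm(prompt: str) -> str:
    """Stub for an LLM call; a real system would call a model API here."""
    return f"response to: {prompt}"

def multi_stage(topic: str) -> str:
    # 1. Initial prompt: elicit a broad but relevant response.
    broad = call_llm(f"Give a broad overview of {topic}.")

    # 2. Intermediate prompt: delve deeper, conditioned on the first response.
    detail = call_llm(f"Given this overview:\n{broad}\nExpand on the key risks.")

    # 3. Refinement prompt: isolate the most relevant information.
    refined = call_llm(
        f"From this analysis:\n{detail}\nKeep only the three most relevant points."
    )

    # 4. Final output: the cumulative, refined result.
    return refined
```

Each stage's prompt embeds the previous stage's output, which is what makes the final result more tailored than any single prompt could be.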

[Diagram: multi-stage prompt engineering architecture]

Dynamic Prompting and Context Awareness

Advanced prompting also involves dynamic prompts that adapt based on the AI's responses. This "Choose Your Own Adventure" style interaction allows the AI to navigate complex tasks more effectively by branching into different paths based on previous answers. This method is particularly useful in scenarios where the context can shift rapidly, and a static prompt-response model would fall short.

As a fairly simple example, in compliance, we may want to summarize a piece of supporting evidence associated with an investigation. The evidence may be an identity document, a CSV of transactions, a social media profile, a criminal background history, a newspaper article or something else entirely. For each of those types of evidence, we’d be looking for different information to support our inquiry. By taking into account the type of investigation, the file name, the contents of the file, and any other context in the case, we can first leverage the AI to determine the nature of the document we’re dealing with. We’re then better equipped to ask the AI to summarize the file and include the most relevant details.
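The classify-then-summarize flow from this compliance example can be sketched as two chained prompts. The helper `call_llm`, the document-type labels, and the per-type instructions are all illustrative assumptions; `call_llm` is stubbed so the branching logic is runnable.

```python
def call_llm(prompt: str) -> str:
    """Stub for an LLM call; returns a canned classification for illustration."""
    if "Which kind of evidence" in prompt:
        return "identity document"
    return f"summary: {prompt[:60]}"

# Per-type summary instructions (hypothetical examples).
SUMMARY_PROMPTS = {
    "identity document": "Extract the holder's name, date of birth, and document number.",
    "transaction csv": "Summarize totals, counterparties, and any unusual patterns.",
    "news article": "Summarize allegations and named parties relevant to the case.",
}

def summarize_evidence(file_name: str, contents: str, case_type: str) -> str:
    # Stage 1: use case context plus the file itself to classify the evidence.
    doc_type = call_llm(
        f"Investigation type: {case_type}\nFile: {file_name}\n"
        f"Contents (excerpt): {contents[:500]}\n"
        f"Which kind of evidence is this? Answer with one of: {list(SUMMARY_PROMPTS)}"
    ).strip().lower()

    # Stage 2: branch to the summary prompt tailored to that document type.
    instruction = SUMMARY_PROMPTS.get(doc_type, "Summarize the most relevant details.")
    return call_llm(f"{instruction}\n\n{contents}")
```

The `.get(..., default)` fallback matters in practice: the classifier stage can return an unexpected label, and the pipeline should still produce a usable summary.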

[Diagram: multi-stage architecture with a document classifier stage]

Integrating Multiple Models

Another advanced technique involves switching between different AI models depending on the task at hand. Different models have unique strengths and can be more effective for specific types of queries. By implementing a system that dynamically selects the best model for a given prompt, businesses can enhance the accuracy and relevance of the AI's responses.

This approach can also involve using simpler machine learning models to preprocess or filter data before it is passed to a more complex LLM, ensuring that the input is optimally prepared for the AI to generate the best possible output. This approach is often referred to as “agent chaining”, which leads us to our next concept.
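A minimal routing sketch of this idea follows. The model names and the `classify_complexity` heuristic are invented for illustration; a real system might replace the heuristic with a small ML classifier and dispatch the prompt to the selected model's API.

```python
def classify_complexity(task: str) -> str:
    """Cheap pre-filter; a real system might use a small ML model here."""
    return "complex" if len(task.split()) > 20 else "simple"

# Hypothetical model tiers; names are placeholders, not real model IDs.
MODEL_FOR = {
    "simple": "small-fast-model",      # lightweight model for routine queries
    "complex": "large-capable-model",  # stronger model for nuanced reasoning
}

def route(task: str) -> str:
    tier = classify_complexity(task)
    model = MODEL_FOR[tier]
    # A real implementation would send the prompt to `model`'s API here.
    return model
```

The preprocessing step keeps cheap, high-volume queries off the expensive model while reserving it for the prompts that need it.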

Putting it All Together: AI Agents

So far, we’ve covered multi-prompt strategies that are hardcoded based on the particular application. These prompt networks can adapt based on awareness of the context, but overall they are implemented as deterministic paths. Could we instead employ an AI to define the strategy? What would it look like if we first used an LLM to define the set of prompts and actions that need to be taken?

This is the core idea behind AI Agents: rather than hardcoding a network of prompts, we start by asking the AI to generate a plan and then leverage a combination of concrete actions and AI prompts to iterate through that plan. In standard prompt-response models, the AI is used to answer questions or generate content. In the agent model, the AI is used as a reasoning engine to determine which actions to take and in which order. Subsequent actions may also leverage AI in a prompt-response mode, but generally, the AI is used for higher-level reasoning about how to solve the problem at hand.
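The plan-then-execute pattern can be sketched as follows. `call_llm`, the plan format (one action per line), and the tool registry are all illustrative assumptions; the LLM call is stubbed so the loop is runnable.

```python
def call_llm(prompt: str) -> str:
    """Stub for an LLM call; returns a canned two-step plan for illustration."""
    if prompt.startswith("Plan:"):
        return "fetch_records\nsummarize"
    return f"llm result for: {prompt}"

# Concrete actions the agent can take (hypothetical tool registry).
TOOLS = {
    "fetch_records": lambda: "3 transactions found",
}

def run_agent(goal: str) -> list:
    # Stage 1: use the LLM as a reasoning engine to produce a plan.
    plan = call_llm(f"Plan: list the steps to accomplish: {goal}").splitlines()

    # Stage 2: iterate through the plan, using a concrete tool where one
    # exists and falling back to a prompt-response call otherwise.
    results = []
    for step in plan:
        if step in TOOLS:
            results.append(TOOLS[step]())
        else:
            results.append(call_llm(f"{step} given: {results}"))
    return results
```

Note how intermediate results feed into later steps, so the plan's ordering, chosen by the model rather than the programmer, actually matters.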

Real-World Applications and Benefits

The implications of these advanced techniques are vast, especially for industries that require high levels of precision and reliability, such as compliance, finance, and healthcare. For instance, in compliance work, AI can be used to summarize complex documents, identify relevant regulatory changes, and flag potential compliance issues with a high degree of accuracy.

By employing advanced prompting methods, businesses can develop AI features that:

  • Improve Reliability
    Multi-prompt architectures produce more tailored, consistent outputs from the underlying LLM, dependable enough to meet the needs of function-specific applications.
  • Improve Contextual Awareness
    AI features employing advanced prompting techniques allow a human operator to get to the heart of even complicated matters faster and more efficiently.
  • Improve Usability
    Advanced prompting techniques allow for the development of more wholly integrated AI solutions, allowing for AI functionality that does not depend on a chatbot-style prompted interaction.

Moving Forward: Building Trust and Adoption

To fully realize the benefits of advanced prompt engineering, it's crucial to build trust in these AI systems. Users need to feel confident that the AI's outputs are reliable and accurate. This trust can be fostered by demonstrating the effectiveness of multi-stage prompt architectures and dynamic interactions through real-world applications and case studies.

To help accelerate this trust building at Hummingbird, we’re leveraging industry-leading approaches like Chain-of-Verification (CoVe) prompting to reduce hallucination and increase accuracy. In this methodology, the AI is used to validate itself: given an initial prompt and response, the AI is asked to generate a set of questions that will validate the response. These questions are then fed into the AI and the responses are collected. Finally, these verification questions and their responses can be used to refine the original response and ensure its validity.
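The verification loop described above can be sketched in four calls: draft, generate verification questions, answer them, revise. As before, `call_llm` is a hypothetical stand-in for a real model call, stubbed here so the control flow is runnable; the prompt wording is illustrative.

```python
def call_llm(prompt: str) -> str:
    """Stub for an LLM call; a real system would call a model API here."""
    return f"[model output for: {prompt[:30]}...]"

def chain_of_verification(question: str) -> str:
    # 1. Draft an initial answer.
    draft = call_llm(question)

    # 2. Ask the model for questions that would verify the draft.
    checks = call_llm(
        f"Q: {question}\nDraft answer: {draft}\n"
        "List questions that would verify each factual claim."
    ).splitlines()

    # 3. Answer each verification question independently of the draft.
    answers = [call_llm(check) for check in checks]

    # 4. Revise the draft in light of the verification answers.
    return call_llm(
        f"Q: {question}\nDraft: {draft}\n"
        f"Verification Q&A: {list(zip(checks, answers))}\n"
        "Produce a corrected final answer."
    )
```

Answering the verification questions separately from the draft is the key design choice: it stops the model from simply restating its original (possibly hallucinated) claims.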

We’re also beginning to experiment with the idea of directly linking evidence and proof points in our AI model’s output to the sections of the review from which they were drawn. This will allow an investigator not just to know, generally, that the evidence presented by the model came from their case data, but to see exactly where each piece of evidence came from. Our work here is still in its early phases, but it has immense potential to help narrow the trust gap that exists for compliance professionals working with AI tools and features.


Conclusion

Advanced prompt engineering represents a significant leap forward in how we interact with and harness the power of LLMs. By moving beyond simple prompt-response models and embracing multi-stage, dynamic, context-aware, and agent-oriented architectures, businesses can develop AI features that are more accurate, efficient, and aligned with specific needs. This evolution in prompt engineering not only enhances the capabilities of AI but also paves the way for more innovative and effective applications across industries. As we continue to explore and refine these techniques, the potential for AI to transform business operations will only grow, driving a new era of technological advancement and efficiency.

Stay Connected

Subscribe to receive new content from Hummingbird