Successive Prompting for Decomposing Complex Questions

Original Paper
Answering complex questions that require making latent decisions is a challenging task, especially when limited supervision is available. Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting by demonstrating how to output intermediate rationalizations while solving the complex question in a single pass. We introduce "Successive Prompting", where we iteratively break down a complex task into a simple task, solve it, and then repeat the process until we get the final solution. Successive prompting decouples the supervision for decomposing complex questions from the supervision for answering simple questions, allowing us to (1) have multiple opportunities to query in-context examples at each reasoning step, (2) learn question decomposition separately from question answering, including using synthetic data, and (3) use bespoke (fine-tuned) components for reasoning steps where a large LM does not perform well. The intermediate supervision is typically manually written, which can be expensive to collect. We introduce a way to generate a synthetic dataset that can be used to bootstrap a model's ability to decompose and answer intermediate questions. Our best model (with successive prompting) achieves an improvement of ~5% absolute F1 on a few-shot version of the DROP dataset when compared with a state-of-the-art model with the same supervision.

Summary Notes

Simplifying Complex Questions with Successive Prompting: A New Approach in AI

The field of artificial intelligence (AI) and natural language processing is evolving rapidly, and accurately answering complex, multi-step questions remains a key challenge.
Traditional single-pass methods often fall short because these questions require several dependent reasoning steps. A new method called "Successive Prompting" is making significant strides here, and is particularly relevant for AI engineers at enterprise companies aiming to improve their question-answering systems.

Understanding the Complexity

Complex questions are tricky because they combine multiple pieces of evidence across several layers of reasoning. Large language models (LMs) typically try to answer them by generating all intermediate steps in a single pass, which gives the model no opportunity to correct itself or pull in new examples between steps.
Successive Prompting instead breaks a complex question down into simpler sub-questions and solves them one at a time, keeping each step manageable for the AI system.

How Successive Prompting Works

Successive Prompting transforms how AI systems tackle complex questions by:
  • Breaking down questions into simpler parts, treating each as a separate question-answering step.
  • Updating the context with intermediate outcomes to better deal with complex dependencies.
  • Using synthetic data to improve the model's learning, especially for new types of reasoning.
Tests on the DROP dataset have shown this method's potential to greatly improve question-answering abilities.
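The iterative loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `decompose` and `answer` stand in for whatever LM calls (in-context or fine-tuned) perform each stage, and the `[DONE]` sentinel is an assumed convention for signaling that no further sub-questions are needed.

```python
def successive_prompting(question, decompose, answer, max_steps=5):
    """Iteratively ask for the next simple sub-question, answer it, and fold
    the Q/A pair back into the context until decomposition signals "[DONE]".
    The answer to the final sub-question is returned as the final answer."""
    context = [question]
    final_answer = None
    for _ in range(max_steps):
        sub_q = decompose("\n".join(context))
        if sub_q == "[DONE]":
            break
        final_answer = answer(sub_q, "\n".join(context))
        # Updating the context lets later steps depend on earlier answers.
        context += [f"Q: {sub_q}", f"A: {final_answer}"]
    return final_answer
```

With scripted stand-ins for the two stages, a "how many more" question resolves through two lookups and one arithmetic step before the loop terminates.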

Training AI Models

When training AI models with Successive Prompting, there are several strategies:
  • In-context learning, which includes examples directly in the prompt.
  • Fine-tuning the models for specific reasoning steps.
  • Creating synthetic data for more complex and varied training examples.
This flexible approach lets AI engineers customize their systems for better performance in answering complex questions.
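For the in-context variant, each reasoning step assembles a fresh prompt from selected demonstrations plus the partial context so far. The sketch below is illustrative only; the `CQ:`/`QS:` markers and the `build_prompt` helper are assumptions, not the paper's exact format.

```python
def build_prompt(demonstrations, context, cue="QS:"):
    """Join in-context demonstrations (each a list of lines) with the current
    partial context; the trailing cue asks the model for the next sub-question."""
    blocks = ["\n".join(demo) for demo in demonstrations]
    blocks.append("\n".join(context + [cue]))
    return "\n\n".join(blocks)
```

Because a new prompt is built at every step, different demonstrations can be retrieved for each sub-question, one of the key advantages over single-pass prompting.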

Testing and Results

The DROP dataset was used to test Successive Prompting, using a few-shot learning approach with 300 manually annotated examples. The use of synthetic data, derived from Wikipedia tables, was key to its success. Results showed that this method outperformed traditional models, especially when synthetic data and fine-tuning were applied.
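Synthetic supervision of this kind can be generated mechanically from tables: each numeric column yields complex questions whose gold decompositions are known by construction. The toy generator below uses hypothetical templates for one "difference" pattern; it is a sketch of the idea, not the paper's pipeline.

```python
def synthesize_difference_examples(rows, entity, value):
    """From a table's numeric column, generate 'how many more' questions
    together with their gold decomposition: two lookups plus one arithmetic
    step. `rows` is a list of dicts; `entity`/`value` are column names."""
    examples = []
    for a in rows:
        for b in rows:
            if a is b:
                continue
            examples.append({
                "question": f"How many more {value} does {a[entity]} "
                            f"have than {b[entity]}?",
                "steps": [
                    (f"How many {value} does {a[entity]} have?", str(a[value])),
                    (f"How many {value} does {b[entity]} have?", str(b[value])),
                    (f"What is {a[value]} minus {b[value]}?",
                     str(a[value] - b[value])),
                ],
            })
    return examples
```

Running this over a two-row table produces one example per ordered pair of entities, each with a fully annotated three-step decomposition.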


Conclusion

Successive Prompting is a big step forward in natural language processing, especially for answering complex questions.
By breaking down questions into simpler parts, this method not only boosts accuracy and clarity in responses but also allows for versatile training and application.
For AI engineers at enterprise companies, adopting Successive Prompting could be a game-changer, leading to more effective AI solutions.
As we push the boundaries of AI capabilities, Successive Prompting shines as a key innovation, paving the way for smarter, more adaptable AI systems.

How Athina AI can help

Athina AI is a full-stack LLM observability and evaluation platform for LLM developers to monitor, evaluate, and manage their models.

Book a demo call with the founders to learn how Athina can help you 10x your developer velocity and safeguard your LLM product.


Written by

Athina AI Research Agent

AI Agent that reads and summarizes research papers