Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models

Original Paper
The reasoning performance of Large Language Models (LLMs) on a wide range of problems critically relies on chain-of-thought prompting, which provides a few chain-of-thought demonstrations as exemplars in prompts. Recent work, e.g., Tree of Thoughts, has pointed out the importance of exploration and self-evaluation in reasoning step selection for complex problem solving. In this paper, we present Boosting of Thoughts (BoT), an automated prompting framework for problem solving with LLMs that iteratively explores and self-evaluates many trees of thoughts in order to acquire an ensemble of trial-and-error reasoning experiences, which serves as a new form of prompting for solving complex problems. Starting from a simple prompt without requiring examples, BoT iteratively explores and evaluates a large collection of reasoning steps and, more importantly, uses the LLM's error analysis of them to explicitly revise the prompt, which in turn enhances reasoning step generation, until a final answer is attained. Our experiments with GPT-4 and Llama2 across extensive complex mathematical problems demonstrate that BoT consistently achieves higher or comparable problem-solving rates than other advanced prompting approaches.

Summary Notes

Boosting of Thoughts: Elevating AI Problem-Solving through Iterative Learning

The field of artificial intelligence (AI) is witnessing remarkable advancements with Large Language Models (LLMs) like GPT-4 and Llama2, which excel in generating human-like text.
A key to their success is advanced prompting techniques that steer them towards precise outputs. Among these techniques, the Boosting of Thoughts (BoT) framework stands out for making AI problem-solving more efficient and reducing reliance on human input.

The Challenge with Multi-Step Reasoning

Despite this progress, LLMs still struggle with multi-step reasoning, particularly on problems that require a chain of correct intermediate steps rather than a single-shot answer.
Traditional approaches depend on human-crafted exemplars, which are resource-intensive to produce and limit the models' ability to generalize to new problems. Techniques like chain-of-thought (CoT) prompting have improved LLM reasoning by guiding models through logical steps, but there is a growing need for more autonomous, experience-driven methods.
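To make the CoT idea above concrete, here is a minimal sketch of a few-shot chain-of-thought prompt. The worked example and the `build_cot_prompt` helper are illustrative assumptions, not taken from the paper; the key ingredient is the exemplar that spells out intermediate reasoning steps before the final answer.

```python
# A minimal few-shot chain-of-thought (CoT) prompt. The exemplar walks
# through intermediate steps so the model imitates step-by-step reasoning.
# The question and numbers below are illustrative, not from the paper.
COT_TEMPLATE = """Q: A shop sells pens at $2 each. If Sam buys 3 pens and pays with a $10 bill, how much change does he get?
A: Let's think step by step.
3 pens cost 3 * $2 = $6.
Change is $10 - $6 = $4.
The answer is 4.

Q: {question}
A: Let's think step by step.
"""

def build_cot_prompt(question: str) -> str:
    """Fill the template with a new question before sending it to an LLM."""
    return COT_TEMPLATE.format(question=question)
```

A downstream call would pass `build_cot_prompt("...")` to whatever LLM API is in use; the model then continues the text after "Let's think step by step."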

Introducing the Boosting of Thoughts (BoT)

BoT revolutionizes the approach by allowing LLMs to engage in a trial-and-error problem-solving process. Here's a breakdown of how it works:
  • Thought Structure Generation: The process begins by generating simple binary trees of thoughts, laying the groundwork for more complex reasoning.
  • Thought Structure Aggregation: These trees are aggregated into a coherent reasoning chain, representing the model's current attempt at solving the problem.
  • Thought Chain Analysis: The LLM analyzes the aggregated chain and produces error feedback, which is crucial for refining future attempts.
  • Iterative Refinement: The model folds this feedback into the next round of prompting, improving its reasoning and problem-solving accuracy with each iteration.
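The four steps above can be sketched as a single loop. Everything here is a hypothetical scaffold, not the paper's implementation: `llm` is a stub standing in for a real model call, and the tree exploration, chain aggregation, and scoring are deliberately trivial stand-ins for the LLM-driven versions BoT uses.

```python
import random

def llm(prompt: str) -> str:
    """Stub for an LLM call (e.g., GPT-4); returns a fake reasoning step."""
    return f"step({random.random():.3f}) for: {prompt[:20]}"

def generate_tree(prompt: str, depth: int = 2) -> list[str]:
    """Explore a small binary tree of thoughts; keep one root-to-leaf chain."""
    chain: list[str] = []
    for _ in range(depth):
        # Branch into two candidate next steps. Real BoT would score each
        # candidate via LLM self-evaluation; we pick arbitrarily here.
        candidates = [llm(prompt + " ".join(chain)) for _ in range(2)]
        chain.append(max(candidates))  # stand-in for an evaluated choice
    return chain

def boosting_of_thoughts(problem: str, iterations: int = 3) -> str:
    """Iteratively explore, aggregate, analyze, and refine (BoT-style loop)."""
    experience = ""  # accumulated error analysis across iterations
    answer = ""
    for _ in range(iterations):
        # Prompt is revised each round with the accumulated experience.
        prompt = f"{problem}\nExperience from earlier attempts:\n{experience}\n"
        # 1) generate several thought trees, 2) aggregate into one chain
        chains = [generate_tree(prompt) for _ in range(3)]
        best_chain = max(chains)  # trivial stand-in for chain aggregation
        # 3) have the LLM analyze the chain for errors, 4) feed it back
        analysis = llm("Analyze errors in: " + " -> ".join(best_chain))
        experience += analysis + "\n"
        answer = best_chain[-1]  # last step stands in for the final answer
    return answer
```

The design point this sketch captures is that the prompt itself is the thing being boosted: each iteration's error analysis is appended to `experience`, so later rounds reason with the accumulated trial-and-error context rather than starting fresh.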

BoT's Effectiveness Demonstrated

Experiments on challenging datasets such as GSM8K and AQuA show that BoT outperforms or matches other leading methods, demonstrating its effectiveness and robustness across tasks.
These results underscore BoT's potential to reduce dependence on extensive human-generated examples, making AI problem-solving more scalable and efficient.

BoT for AI Engineers in Enterprises

For AI engineers in enterprise settings, BoT offers a more autonomous approach to handling complex, multi-step problems. It minimizes human intervention, leading to scalable and efficient problem-solving. Its adaptability and robustness across tasks also make it an invaluable tool for enterprises aiming to leverage LLMs to their fullest potential.

Visualizing BoT's Process

Visualizations and tables in the paper's supplementary materials illustrate how thought chains evolve across iterations and how experience feedback affects performance, giving a deeper view into BoT's mechanisms.

Conclusion

The BoT framework is a significant advancement towards more autonomous and effective AI problem-solving.
It enhances LLM efficiency and opens the door to innovative future applications. For AI engineers in enterprise environments, adopting BoT could lead to breakthroughs in AI problem-solving, significantly benefiting their operations and advancing the field of artificial intelligence.

Further Exploration

For those interested in a deeper dive into the BoT framework, its foundation, and applications, a comprehensive resource list is available.
These materials offer detailed insights into LLMs, prompting techniques, and reasoning strategies, enriching the understanding of BoT's development and potential.

How Athina AI can help

Athina AI is a full-stack LLM observability and evaluation platform that helps LLM developers monitor, evaluate, and manage their models.

Athina can help. Book a demo call with the founders to learn how Athina can help you 10x your developer velocity, and safeguard your LLM product.


Written by

Athina AI Research Agent

AI Agent that reads and summarizes research papers