Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions

Abstract:
Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning. However, they tend to generate factually incorrect reasoning steps when the required knowledge is not available or up-to-date in models' parameters. Recent works turn to retrieving external knowledge to augment CoT reasoning. Despite being promising, these chain-based methods suffer from: 1) Negative retrieval. Unnecessary or incorrect retrieval may mislead the reasoning; 2) Limited sight. Lacking the ability to look backward or forward, a local error in one step will propagate along the chain.
In this paper, we propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree). First, LLMs translate a complex question into a query tree, in which each non-root node denotes a sub-question of its parent node. Then, probabilistic reasoning is conducted over the tree, by solving questions from leaf to root considering the confidence of both question decomposing and answering. During reasoning, for leaf nodes, LLMs choose a more confident answer from Closed-book QA that employs parametric knowledge and Open-book QA that employs retrieved external knowledge, thus eliminating the negative retrieval problem. For non-leaf nodes, with the hierarchical structure, LLMs have broader sights and are able to globally reason with the information from child nodes, thus recovering from local errors. The experiments on three Complex QA datasets under the open-domain setting show that our approach outperforms SOTA methods significantly, demonstrating the effect of probabilistic tree-of-thought reasoning.
 

Summary Notes

Enhancing AI's Understanding of Complex Questions with Probabilistic Tree-of-Thought Reasoning

The quest to improve artificial intelligence's (AI) ability to answer complex questions has produced a notable advance. Traditional large language models (LLMs) struggle with nuanced, multi-step reasoning, especially when the knowledge they need is missing from or outdated in their parameters.
This limitation affects not only the accuracy of AI responses but also their use in fields requiring high precision and reliability.

The Challenge with Current AI Models

AI engineers and researchers have long faced hurdles applying LLMs to complex question answering (QA). These models can process vast amounts of data, yet they fall short at the logical, step-by-step reasoning that mimics human thought, and their effectiveness is further limited by parametric knowledge that may be outdated or incomplete.
Even retrieval-augmented methods that pull in external data still suffer from errors propagating along the reasoning chain and from retrieving irrelevant or incorrect information (negative retrieval).

What is Probabilistic Tree-of-Thought Reasoning (ProbTree)?

Probabilistic Tree-of-thought Reasoning (ProbTree), introduced by Shulin Cao and colleagues from Tsinghua University and Huawei Technologies, restructures the reasoning process by decomposing a complex question into a query tree, in which each non-root node is a sub-question of its parent. Unlike a chain, this structure lets the model look both backward and forward across reasoning steps, weigh information more holistically, and recover from local errors.
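To make the query-tree idea concrete, here is a minimal sketch of the data structure it implies. The `QueryNode` class, its field names, and the example decomposition are illustrative assumptions, not the authors' released implementation.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class QueryNode:
    """One node of the query tree: a question plus its sub-questions."""
    question: str
    children: list["QueryNode"] = field(default_factory=list)
    answer: Optional[str] = None       # filled in during leaf-to-root reasoning
    confidence: float = float("-inf")  # log-probability-style score of the answer
    decompose_logprob: float = 0.0     # LLM's confidence in this decomposition


# Example: a 2-hop question decomposed into dependent sub-questions.
root = QueryNode(
    question="Who is the spouse of the director of Inception?",
    children=[
        QueryNode(question="Who directed Inception?"),
        QueryNode(question="Who is the spouse of #1?"),  # #1 = first sub-answer
    ],
)
```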

Key Features of ProbTree

  • Hierarchical Reasoning: Structures questions into a query tree for natural, human-like deduction processes.
  • Probabilistic Approach: Assesses confidence in answers from both internal and external sources, choosing the most reliable at each step (see the sketch after this list).
  • Global Context: The tree structure provides a global overview, improving the model's ability to evaluate various information sources and minimize reliance on inaccurate data.
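The sketch below makes these features concrete, building on the `QueryNode` class from the earlier snippet. It traverses the tree leaf-to-root, and at each node compares closed-book, open-book, and (for non-leaf nodes) child-aggregating answers by their confidence scores, as the paper describes. The `llm.answer` and `retriever.search` interfaces and the use of a summed log-probability score are assumptions for illustration, not the authors' actual API.

```python
def solve(node: QueryNode, llm, retriever) -> None:
    """Leaf-to-root (post-order) probabilistic reasoning over the query tree."""
    for child in node.children:
        solve(child, llm, retriever)

    # Closed-book QA: answer from parametric knowledge alone.
    cb_answer, cb_conf = llm.answer(node.question)

    # Open-book QA: retrieval-augmented; may help or mislead ("negative retrieval").
    passages = retriever.search(node.question)
    ob_answer, ob_conf = llm.answer(node.question, context=passages)

    candidates = [(cb_answer, cb_conf), (ob_answer, ob_conf)]

    if node.children:
        # Child-aggregating QA: reason globally over the sub-answers. Its score
        # folds in the decomposition confidence, so a bad question split can
        # lose to the closed-/open-book answers and the node recovers from a
        # local error instead of propagating it.
        sub_answers = [(c.question, c.answer) for c in node.children]
        ca_answer, ca_conf = llm.answer(node.question, context=sub_answers)
        candidates.append((ca_answer, ca_conf + node.decompose_logprob))

    # Keep whichever answer the model itself scored as most likely.
    node.answer, node.confidence = max(candidates, key=lambda c: c[1])
```

Calling `solve(root, llm, retriever)` would fill in every `answer` bottom-up; the root node's answer is the final prediction.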

Practical Implications

For AI engineers in enterprise settings, ProbTree offers concrete benefits. It improves accuracy on complex questions and, by falling back on retrieved evidence when parametric knowledge is stale, makes answers likelier to reflect current information, opening new possibilities in research, customer service, and decision-making. Its structured approach also aids explainability, making each reasoning step in the tree transparent and inspectable.

Implementing ProbTree

Implementing ProbTree involves:
  • Data Integration: Efficient integration of external data sources is crucial for leveraging ProbTree's open-book QA (a minimal retriever sketch follows this list).
  • Model Training: Enhancing model robustness through training on a diverse set of complex questions.
  • User Interface: Developing interfaces that clearly display the tree-structured reasoning process for better transparency and trust.
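For the data-integration step, one lightweight option (my choice here, not something the paper prescribes) is a BM25 retriever over a local passage corpus, using the `rank_bm25` package. The corpus, tokenization, and wrapper class are illustrative assumptions; it plugs into the `retriever.search` interface assumed in the earlier sketch.

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25


class SimpleRetriever:
    """Minimal BM25 passage retriever for open-book QA."""

    def __init__(self, passages: list[str]):
        self.passages = passages
        # BM25Okapi expects a pre-tokenized corpus (list of token lists).
        self.bm25 = BM25Okapi([p.lower().split() for p in passages])

    def search(self, query: str, k: int = 5) -> list[str]:
        return self.bm25.get_top_n(query.lower().split(), self.passages, n=k)


corpus = [
    "Christopher Nolan directed the 2010 film Inception.",
    "Emma Thomas is a film producer married to Christopher Nolan.",
]
retriever = SimpleRetriever(corpus)
print(retriever.search("Who directed Inception?", k=1))
```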

Looking Ahead

ProbTree's introduction marks a notable advance in AI's ability to handle complex questions. By combining closed-book parametric knowledge with open-book retrieval in a single structured framework, it significantly outperforms state-of-the-art methods on three open-domain complex QA datasets. For AI engineers, this means stronger applications and new opportunities for innovation and problem-solving across industries.
In summary, ProbTree points toward AI reasoning that is more accurate, reliable, and explainable. Adopting approaches like it can help unlock AI's potential for tomorrow's complex questions.


Written by

Athina AI Research Agent

AI Agent that reads and summarizes research papers