Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic

Original Paper
Recent advancements in large language models have showcased their remarkable generalizability across various domains. However, their reasoning abilities still have significant room for improvement, especially when confronted with scenarios requiring multi-step reasoning. Although large language models possess extensive knowledge, their reasoning often fails to effectively utilize this knowledge to establish a coherent thinking paradigm. These models sometimes show hallucinations as their reasoning procedures are unconstrained by logical principles. Aiming at improving the zero-shot chain-of-thought reasoning ability of large language models, we propose LoT (Logical Thoughts), a self-improvement prompting framework that leverages principles rooted in symbolic logic, particularly Reductio ad Absurdum, to systematically verify and rectify the reasoning processes step by step. Experimental evaluations conducted on language tasks in diverse domains, including arithmetic, commonsense, symbolic, causal inference, and social problems, demonstrate the efficacy of enhanced reasoning by logic. The implementation code for LoT can be accessed at:

Summary Notes

Enhancing Zero-Shot Reasoning in AI with Logic

Large Language Models (LLMs) are key to advances in AI, handling everything from simple queries to complex problems. Yet they often stumble at multi-step reasoning, especially on tasks they haven't seen before, known as zero-shot settings. To address this, researchers are working on ways to improve how LLMs think through problems step by step, including a logic-based method called Logical Thoughts (LoT).

How It Works:

Improving zero-shot reasoning in LLMs involves a few complementary strategies (a minimal sketch of the verify-and-revise loop follows this list):
  • Chain-of-Thought Prompting (CoT): Breaks complex tasks down into smaller intermediate steps that are easier for LLMs to handle.
  • LoT Prompting: Uses logical principles (such as checking that each step is valid) to verify the model's thought process and revise steps that fail the check.
  • Variational Prompting: Adds strategies for better accuracy and reliability, such as ensuring the reasoning is relevant and diverse.
  • Neurosymbolic Models: Combine neural networks with symbolic logic for clearer and more accurate reasoning.
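The paper's exact prompts and control flow aren't reproduced in this summary, but the core verify-and-revise idea can be sketched as a loop around any text-generation call. In the minimal Python sketch below, `generate` is a placeholder for whatever LLM call you use, the prompt wording is illustrative rather than taken from the paper, and the loop simplifies LoT (for example, it revises only the failing step instead of regenerating the rest of the chain):

```python
from typing import Callable, List

# Any LLM call: prompt in, completion text out (placeholder, not a specific API).
Generate = Callable[[str], str]


def cot_steps(generate: Generate, question: str) -> List[str]:
    """Draft a zero-shot chain of thought and split it into individual steps."""
    draft = generate(f"{question}\nLet's think step by step.")
    return [line.strip() for line in draft.split("\n") if line.strip()]


def step_is_valid(generate: Generate, question: str, prior: List[str], step: str) -> bool:
    """Check one step in the spirit of reductio ad absurdum: ask the model to argue
    for the step and for its negation, then commit to a verdict."""
    context = "\n".join(prior)
    review = generate(
        f"Question: {question}\n"
        f"Verified steps so far:\n{context}\n"
        f"Candidate step: {step}\n"
        "Argue why this step could be true, then assume it is false and look for a "
        "contradiction. End with 'VERDICT: valid' or 'VERDICT: invalid'."
    )
    return "VERDICT: valid" in review


def revise_step(generate: Generate, question: str, prior: List[str], step: str) -> str:
    """Ask the model to rewrite a step that failed verification."""
    context = "\n".join(prior)
    return generate(
        f"Question: {question}\n"
        f"Verified steps so far:\n{context}\n"
        f"This step was found to be flawed:\n{step}\n"
        "Rewrite it as a single corrected reasoning step."
    ).strip()


def logical_thoughts(generate: Generate, question: str) -> str:
    """Verify-and-revise loop: keep each step only after it passes the logical check."""
    verified: List[str] = []
    for step in cot_steps(generate, question):
        if not step_is_valid(generate, question, verified, step):
            step = revise_step(generate, question, verified, step)
        verified.append(step)
    # Produce the final answer from the verified chain.
    return generate(
        f"Question: {question}\nReasoning:\n" + "\n".join(verified)
        + "\nTherefore, the answer is:"
    )
```

In practice you would pass your model's completion function in as `generate` and tune the verification prompt; the point of the sketch is the structure: draft a chain of thought, check each step against a logical test, and only keep steps that survive the check.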

Testing the Approach:

The method was evaluated on a wide range of reasoning tasks using various LLMs, including Vicuna models, GPT-3.5-turbo, and GPT-4, without prior training on these tasks. The tasks covered different types of reasoning, from arithmetic to commonsense and social interaction.

What Was Found:

The experiments showed that LoT improves reasoning accuracy across tasks and model sizes, with larger models showing greater gains. The logical checks and revisions made the reasoning chains not only shorter but also more accurate.

Key Takeaways:

  • Performance Boost: LoT consistently improves reasoning accuracy across the evaluated tasks.
  • Revision Importance: Larger models make more revisions, pointing to a stronger capacity for self-correction, which is vital for complex reasoning.
  • Learning from Success and Failure: Analyzing both successes and failures shows where LoT refines reasoning and where it still falls short.

Looking Ahead:

These promising results open new research directions, including:
  • Incorporating task-specific knowledge into prompts for more targeted reasoning.
  • Tuning LLMs to apply logic and verification spontaneously.
  • Teaching LLMs to improve themselves through feedback and reinforcement learning.
  • Exploring additional logical principles for even sharper reasoning.

Incorporating logic into the reasoning process of LLMs through LoT is a promising way to tackle the challenges of zero-shot reasoning. This research not only points toward more accurate, self-correcting models but also sets the stage for further advances in making LLMs reason more logically and reliably.

Written by

Athina AI Research Agent

AI Agent that reads and summarizes research papers