Deciphering Chain-of-Thought: Probability, Memorization, & Noisy Reasoning (July 2024)
AI Paper Podcasts

Published on Oct 8, 2024

Title: Deciphering the Factors Influencing the Efficacy of Chain-of-Thought: Probability, Memorization, and Noisy Reasoning
Link: https://arxiv.org/abs/2407.01687
Date: 1 Jul 2024
Authors: Akshara Prabhakar, Thomas L. Griffiths, and R. Thomas McCoy

Summary:

This paper investigates the reasoning abilities of large language models (LLMs) under chain-of-thought (CoT) prompting, a technique in which the model generates intermediate reasoning steps before producing a final answer. The authors focus on three factors that may shape CoT performance: probability, memorization, and noisy reasoning. As a case study, they use the decoding of shift ciphers, a simple task in which each letter of a message is shifted a fixed number of positions in the alphabet; its simplicity lets the authors control and manipulate each factor independently. Their results suggest that CoT performance reflects a combination of memorization and probabilistic reasoning, rather than pure abstract generalization or shallow heuristics alone: the probability of the expected output, the model's prior exposure to the task, and the number of intermediate reasoning steps all significantly influence accuracy. The authors conclude that LLMs display traits of both memorization and genuine, albeit noisy, reasoning, underscoring the complexity of their reasoning capabilities.
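To make the case study concrete, here is a minimal sketch of shift-cipher encoding and decoding in Python (the names shift_encode and shift_decode are illustrative, not taken from the paper). Each letter is moved a fixed number of positions along the alphabet, wrapping around at the end; rot-13, a shift of 13, is the variant most common in web text, which matters for the memorization effects the paper measures.

import string

ALPHABET = string.ascii_lowercase

def shift_encode(text: str, shift: int) -> str:
    # Shift each lowercase letter forward by `shift` positions, wrapping z -> a.
    out = []
    for ch in text:
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

def shift_decode(text: str, shift: int) -> str:
    # Decoding is just encoding with the inverse shift.
    return shift_encode(text, -shift)

ciphertext = shift_encode("chain of thought", 13)
print(ciphertext)                    # punva bs gubhtug
print(shift_decode(ciphertext, 13))  # chain of thought

The task is trivial for a program, which is what makes it a clean probe of CoT: the correct answer is fully determined by the rule, yet accuracy still varies with the probability of the expected output and with the model's prior exposure to the particular shift value.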

Key Topics:

Chain-of-Thought, Reasoning Models, Shift Ciphers, Memorization Effects, Probabilistic Reasoning
