Flow of Reasoning: Efficient Training of LLM Policy with Divergent Thinking

1 University of California, San Diego   2 Allen Institute for AI

Abstract

Divergent thinking, the cognitive process of generating diverse solutions, is a hallmark of human creativity and problem-solving. For machines, sampling diverse solution trajectories in complex reasoning problems is crucial for robust outcomes, data augmentation, and enhanced model generalization. Large language models (LLMs) often struggle with generating high-quality, diverse reasoning. While supervised fine-tuning helps with quality, it requires extensive supervision data to capture the full diversity of solutions. Alternatively, reinforcement learning methods like PPO aim to find a limited set of highest-reward solutions while neglecting solution diversity, akin to convergent thinking. To address these limitations, we propose Flow of Reasoning (FoR), an efficient LLM training approach that enables diverse reasoning with minimal data. FoR formulates multi-step LLM reasoning as a Markovian flow from an initial state to terminal states. This formulation allows us to adapt principled GFlowNet approaches to train the LLM as a policy that samples multiple reasoning paths with probabilities proportional to the unnormalized reward. Empirical results show that, with limited training data (e.g., 15 examples), FoR discovers diverse, high-quality solutions that substantially outperform current state-of-the-art methods across three tasks: embodied reasoning (BlocksWorld), math puzzle solving (Game24), and logical reasoning (PrOntoQA).
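
In more concrete terms, and using standard GFlowNet notation as a shorthand (not formulas quoted from the paper), the formulation treats a reasoning trajectory as a sequence of states produced step by step by the LLM policy, and training aims to make terminal states appear with probability proportional to the reward:

```latex
% A reasoning trajectory is a Markovian flow from the initial state s_0
% to a terminal state x = s_T, generated step by step by the LLM policy:
\tau = (s_0 \to s_1 \to \dots \to s_T = x),
\qquad
P_F(\tau) = \prod_{t=0}^{T-1} \pi_\theta(s_{t+1} \mid s_t)

% GFlowNet-style training objective: sample terminal states with
% probability proportional to the unnormalized reward R(x) \ge 0.
P_\theta^{\top}(x) \;\propto\; R(x)
```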

FoR Framework

The following diagram shows an overview of the FoR framework.

As illustrated in the above diagram, our FoR framework includes three main steps:

  1. Trajectory exploration: combine on-policy and off-policy strategies to explore the trajectory space and collect effective training samples.
  2. Policy training: train the LLM policy on the collected trajectories with the trajectory balance objective (see the sketch after this list).
  3. Solution sampling: use the trained LLM policy to generate diverse reasoning paths.
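
As a rough illustration of step 2, here is a minimal sketch of a trajectory balance loss in PyTorch. The tensor names, the learnable log-partition parameter `log_z`, and the omission of the backward-policy term are our assumptions for illustration, not code taken from the FoR implementation; it only presumes that the sum of per-step log-probabilities under the LLM policy and the log terminal reward are available for each sampled trajectory.

```python
import torch

def trajectory_balance_loss(log_pf_sum: torch.Tensor,
                            log_reward: torch.Tensor,
                            log_z: torch.Tensor) -> torch.Tensor:
    """Squared trajectory balance residual, averaged over a batch.

    log_pf_sum: [B] sum over steps of log pi_theta(s_{t+1} | s_t) per trajectory
    log_reward: [B] log R(x) of each trajectory's terminal state
    log_z:      scalar learnable estimate of the log partition function
    """
    # Trajectory balance drives log Z + log P_F(tau) to match log R(x).
    # The backward-policy term of the full objective is dropped here, which
    # is exact when every state has a unique parent (tree-structured reasoning).
    return ((log_z + log_pf_sum - log_reward) ** 2).mean()

# Toy usage with dummy values for a batch of 4 trajectories.
log_pf_sum = torch.randn(4, requires_grad=True)   # stand-in for the LLM policy's log-probs
log_reward = torch.zeros(4)                       # e.g., log R(x) = 0 when R(x) = 1
log_z = torch.nn.Parameter(torch.zeros(()))       # trained jointly with the policy
loss = trajectory_balance_loss(log_pf_sum, log_reward, log_z)
loss.backward()
```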

Downstream Tasks

We apply our proposed FoR to the following three popular reasoning tasks:

  • Embodied reasoning (BlocksWorld): the model must produce a sequence of actions that rearranges blocks into stacks in a specified order.
  • Mathematical reasoning (Game24): the model combines 4 given integers with the basic arithmetic operations (+, -, ×, ÷) to reach 24, using each number exactly once (see the checker sketched after this list).
  • Logical reasoning (PrOntoQA): the model applies a given set of facts and rules to decide whether a query statement is true or false.
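
To make the Game24 constraint concrete, here is a small hypothetical checker (our illustration, not code from the paper) that verifies a candidate expression uses exactly the four given integers, each once, and evaluates to 24:

```python
import ast

def is_valid_game24(expression: str, numbers: list[int]) -> bool:
    """Check a Game24 candidate: uses each given number exactly once and equals 24."""
    try:
        tree = ast.parse(expression, mode="eval")
    except SyntaxError:
        return False
    # Integer literals appearing in the candidate expression.
    used = sorted(node.value for node in ast.walk(tree)
                  if isinstance(node, ast.Constant) and isinstance(node.value, int))
    if used != sorted(numbers):
        return False
    try:
        return abs(eval(expression) - 24) < 1e-6
    except ZeroDivisionError:
        return False

# Example: 4, 9, 10, 13  ->  (10 - 4) * (13 - 9) = 24
print(is_valid_game24("(10 - 4) * (13 - 9)", [4, 9, 10, 13]))   # True
print(is_valid_game24("4 + 9 + 10 + 13", [4, 9, 10, 13]))       # False (sums to 36)
```
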
Here are some example solutions generated by FoR:

[Example solutions generated by FoR for the three tasks]

Experiment Results

Embodied Reasoning
  • FoR achieves state-of-the-art accuracy compared to existing baseline methods such as Chain-of-Thought, Tree-of-Thought, and RAP.
  • FoR generates more diverse solution reasoning paths than other methods.

Mathematical Reasoning
  • FoR outperforms other baselines in both diversity and accuracy.

Logical Reasoning
  • FoR effectively solves logical reasoning problems in both in-distribution and out-of-distribution settings.
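
As a side note on how solution diversity can be quantified, the snippet below counts the distinct correct solutions among a policy's samples; this is only an illustrative measure under our own assumptions, not necessarily the exact metric reported in the paper.

```python
def solution_diversity(samples: list[str], is_correct) -> int:
    """Count distinct correct solutions among sampled reasoning paths.

    Illustrative measure only (an assumption on our part): the number of
    unique samples that pass the task's correctness check.
    """
    return len({s for s in samples if is_correct(s)})

# Toy Game24 example: two distinct correct expressions, one incorrect one.
samples = ["(10 - 4) * (13 - 9)", "(13 - 9) * (10 - 4)", "4 + 9 + 10 + 13"]
print(solution_diversity(samples, lambda s: abs(eval(s) - 24) < 1e-6))  # 2
```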

BibTeX

@article{yu2024flow,
  title={Flow of Reasoning: Efficient Training of LLM Policy with Divergent Thinking},
  author={Yu, Fangxu and Jiang, Lai and Kang, Haoqiang and Hao, Shibo and Qin, Lianhui},
  journal={arXiv preprint arXiv:2406.05673},
  year={2024}
}